
reproduce outside a clinical setting. This criticism has led to the development of a field of study based on decision making in natural environments. Much of the research in this area comes from professional settings, such as hospitals or nuclear plants (Carroll, Hatakenaka, & Rudolph, 2006; Galanter & Patel, 2005; Roswarski & Murray, 2006). These situations share a number of features, including ill-structured problems, changing situations, high risk, time pressure, and, sometimes, a team environment (Orasanu & Connolly, 1993). A number of models are used to explain performance in these high-stakes situations. These models allow for the consideration of cognitive, emotional, and situational factors of skilled decision makers; they also provide a framework for advising future decision makers (Klein, 1997; Lipshitz & associates, 2001). For instance, Orasanu (2005) developed recommendations for training astronauts to be successful decision makers by evaluating what makes current astronauts successful. Naturalistic decision making can be applied to a broad range of behaviors and environments. These applications can include individuals as diverse as badminton players, railroad controllers, and NASA astronauts (Farrington-Darby & associates, 2006; Macquet & Fleurance, 2007; Orasanu, 2005; Patel, Kaufman, & Arocha, 2002).

Group Decision Making
Working as a group can enhance the effectiveness of decision making, just as it can enhance the effectiveness of problem solving. Many companies combine individuals into teams to improve decision making (Cannon-Bowers & Salas, 1998). By forming decision-making teams, the group benefits from the expertise of each of the members, and there is an increase in resources and ideas (Salas, Burke, & Cannon-Bowers, 2000). Another benefit of group decision making is improved group memory over individual memory (Hinsz, 1990). Groups that are successful in decision making exhibit a number of similar characteristics: the group is small; it has open communication; and members share a common mind-set, identify with the group, and agree on acceptable group behavior (Shelton, 2006).

In juries, members share more information during decision making when the group is made up of diverse members (Sommers, 2006). The juries are thereby in a position to make better decisions. Furthermore, in examining decision making in public policy groups, it has been noted that interpersonal influence is important (Jenson, 2007). Group members frequently employed tactics to affect other members' decisions; the most frequently used and influential tactics were inspirational and rational appeals (Jenson, 2007).

However, there are some disadvantages associated with group decision making. Of these, one of the most explored is groupthink. Groupthink is a phenomenon characterized by premature decision making that is generally the result of group members attempting to avoid conflict (Janis, 1971). Groupthink frequently results in suboptimal decision making that avoids nontraditional ideas (Esser, 1998). One major cause of groupthink is anxiety (Chapman, 2006). When group members are anxious, they are less likely to explore new options and will likely try to avoid further conflict.

What conditions lead to groupthink? Janis cited three kinds: (1) an isolated, cohesive, and homogeneous group is empowered to make decisions; (2) objective and impartial leadership is absent, within the group or outside it; and (3) high levels of stress impinge on the group decision-making process.



The groups responsible for making foreign-policy decisions are excellent candidates for groupthink. They are usually like-minded. Moreover, they frequently isolate themselves from what is going on outside their own group. They generally try to meet specific objectives and believe they cannot afford to be impartial. Also, of course, they are under very high stress because the stakes involved in their decisions can be tremendous.
Six Symptoms of Groupthink

Janis further delineated six symptoms of groupthink. (1) In closed-mindedness, the group is not open to alternative ideas. (2) In rationalization, the group goes to great lengths to justify both the process and the product of its decision making, distorting reality where necessary in order to be persuasive. (3) In the squelching of dissent, those who disagree are ignored, criticized, or even ostracized. (4) In the formation of a "mindguard" for the group, one person appoints himself or herself the keeper of the group norm and ensures that people stay in line. (5) In feeling invulnerable, the group believes that it must be right, given the intelligence of its members and the information available to them. (6) In feeling unanimous, members believe that everyone unanimously shares the opinions expressed by the group. Groupthink results in defective decision making: alternatives are examined insufficiently, risks are assessed inadequately, and information about alternatives is sought incompletely.

Consider how groupthink might arise when college students decide to damage a statue on the campus of a football rival to teach a lesson to the students and faculty at the rival university. The students rationalize that damage to a statue really is no big deal. Who cares about an old, ugly statue anyway? When one group member dissents, other members quickly make him feel disloyal and cowardly. His dissent is squelched. The group's members feel invulnerable. They are going to damage the statue under the cover of darkness, and the statue is never guarded. They are sure they will not be caught. Finally, all the members agree on the course of action. This apparent feeling of unanimity convinces the group members that, far from being out of line, they are doing what needs to be done.

Antidotes for Groupthink

Janis prescribed several antidotes for groupthink. For example, the leader of a group should encourage constructive criticism, be impartial, and ensure that members seek input from people outside the group. The group should also form subgroups that meet separately to consider alternative solutions to a single problem. It is important that the leader take responsibility for preventing spurious conformity to a group norm.

In 1997, members of the Heaven's Gate cult in California committed mass suicide in the hope of meeting up with extraterrestrials in a spaceship trailing the Hale-Bopp comet. Although this group suicide is a striking example of conformity to a destructive group norm, similar events have occurred throughout human history, such as the suicide of more than 900 members of the Jonestown, Guyana, religious cult in 1978. Worse was the murder in 2000 of hundreds of individuals in Uganda by leaders of a cult that the individuals had joined. And even in the twenty-first century, suicide bombers are killing themselves and others in carefully planned attacks.



Heuristics and Biases
People make many decisions based on biases and heuristics (shortcuts) in their thinking (Kahneman & Tversky, 1972, 1990; Stanovich, Sá, & West, 2004; Tversky & Kahneman, 1971, 1993). These mental shortcuts lighten the cognitive load of making decisions, but they also allow for a much greater chance of error.
Representativeness

Before you read about representativeness, try the following problem from Kahneman and Tversky (1972).

All the families having exactly six children in a particular city were surveyed. In 72 of the families, the exact order of births of boys and girls was G B G B B G (G, girl; B, boy). What is your estimate of the number of families surveyed in which the exact order of births was B G B B B B?

Most people judging the number of families with the B G B B B B birth pattern estimate the number to be less than 72. Actually, the best estimate of the number of families with this birth order is 72, the same as for the G B G B B G birth order. The expected number for the second pattern is the same because the gender of each birth is independent (at least, theoretically) of the gender of every other birth. For any one birth, the chance of a boy (or a girl) is one in two. Thus, any particular pattern of six births is equally likely, with probability (1/2)^6 = 1/64, even B B B B B B or G G G G G G.

Why do many of us believe some birth orders to be more likely than others? In part, the reason is that we use the heuristic of representativeness. In representativeness, we judge the probability of an uncertain event according to (1) how obviously it is similar to or representative of the population from which it is derived and (2) the degree to which it reflects the salient features of the process by which it is generated (such as randomness) (see also Fischhoff, 1999; Johnson-Laird, 2000, 2004). For example, people believe that the first birth order is more likely because (1) it is more representative of the number of females and males in the population and (2) it looks more random than the second birth order. In fact, of course, either birth order is equally likely to occur by chance.

Similarly, suppose people are asked to judge the probability of flips of a coin yielding the sequence H T H H T H (H, heads; T, tails). Most people will judge it as more probable than the sequence H H H H T H. If you expect a sequence to be random, you tend to view as more likely a sequence that "looks random." Indeed, people often comment that the numbers in a table of random numbers "don't look random." The reason is that people underestimate the number of runs of the same number that will appear wholly by chance. We frequently reason in terms of whether something appears to represent a set of accidental occurrences, rather than actually considering the true likelihood of a given chance occurrence. This tendency makes us more vulnerable to the machinations of magicians, charlatans, and con artists. Any of them may make much of their having predicted the realistic probability of a nonrandom-looking event.
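To make the equal-likelihood point concrete, here is a minimal simulation sketch (mine, not from the text; the trial count is arbitrary) showing that both exact sequences occur at about the same rate:

```python
import random

# Estimate how often each exact six-birth sequence occurs when each birth
# is independently a boy (B) or a girl (G) with probability 1/2.
trials = 1_000_000
counts = {"GBGBBG": 0, "BGBBBB": 0}
for _ in range(trials):
    sequence = "".join(random.choice("BG") for _ in range(6))
    if sequence in counts:
        counts[sequence] += 1
for sequence, count in counts.items():
    print(sequence, count / trials)  # both approach (1/2)**6 = 1/64, about 0.0156
```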



People often mistakenly believe in the gambler's fallacy. They think that if they have been unlucky in their gambles, it is time for their luck to change. In fact, success or failure in past gambles has no effect on the likelihood of success in future ones.

Research shows that the "hot hand" effect is in our minds, not in the players' games. Making a past shot does not increase the player's chance of making future shots.

For example, the odds are 9 to 1 that two people in a group of 40 (e.g., in a classroom or a small nightclub audience) will share a birthday (the same month and day, not necessarily the same year). In a group of 14 people, there are better-than-even odds that two people will have birthdays within a day of each other (Krantz, 1992).

Another example of the representativeness heuristic is the gambler's fallacy. The gambler's fallacy is a mistaken belief that the probability of a given random event, such as winning or losing at a game of chance, is influenced by previous random events. For example, a gambler who loses five successive bets may believe that a win is therefore more likely the sixth time. He feels that he is "due" to win. In truth, of course, each bet (or coin toss) is an independent event. It has an equal probability of winning or losing. The gambler is no more likely to win on the 6th bet than on the 1st, or on the 1,001st.
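A short simulation sketch (mine, not from the text) makes the independence point directly: conditioning on five straight losses still leaves the sixth fair bet at even odds.

```python
import random

# Among runs that start with five straight losses, measure how often the
# sixth (independent, fair) bet wins: the gambler is never "due."
trials = 1_000_000
runs_with_five_losses = 0
wins_on_sixth = 0
for _ in range(trials):
    bets = [random.random() < 0.5 for _ in range(6)]  # True = win
    if not any(bets[:5]):          # lost the first five bets
        runs_with_five_losses += 1
        wins_on_sixth += bets[5]
print(wins_on_sixth / runs_with_five_losses)  # close to 0.5
```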



A related fallacy is the misguided belief in the "hot hand" or the "streak shooter" in basketball. Apparently, both professional and amateur basketball players, as well as their fans, believe that a player's chances of making a basket are greater after making a previous shot than after missing one. However, the statistical likelihoods (and the actual records of players) show no such tendency (Gilovich, Vallone, & Tversky, 1985). Shrewd players will take advantage of this belief and will closely guard opponents immediately after they have made baskets. The reason is that the opposing players will be more likely to try to get the ball to these perceived "streak shooters."

That we frequently rely on the representativeness heuristic may not be terribly surprising. It is easy to use and often works. For example, suppose we have not heard a weather report before stepping outside. We informally judge the probability that it will rain. We base our judgment on how well the characteristics of this day (e.g., the month of the year, the area in which we live, and the presence or absence of clouds in the sky) represent the characteristics of days on which it rains.

Another reason that we often use the representativeness heuristic is that we mistakenly believe that small samples (e.g., of events, of people, of characteristics) resemble in all respects the whole population from which the sample is drawn (Tversky & Kahneman, 1971). We particularly tend to underestimate the likelihood that the characteristics of a small sample of a population (e.g., the people whom we know well) inadequately represent the characteristics of the whole population. We also tend to use the representativeness heuristic more frequently when we are highly aware of anecdotal evidence based on a very small sample of the population. This reliance on anecdotal evidence has been referred to as a "man-who" argument (Nisbett & Ross, 1980). When presented with statistics, we may refute those data with our own observations: "I know a man who . . ." For example, faced with statistics on coronary disease and high-cholesterol diets, someone may counter with, "I know a man who ate whipped cream for breakfast, lunch, and dinner and lived to be 110 years old. He would have kept going, but he was shot through his perfectly healthy heart by a jealous lover."

One reason that people misguidedly use the representativeness heuristic is that they fail to understand the concept of base rates. Base rate refers to the prevalence of an event or characteristic within its population of events or characteristics. In everyday decision making, people often ignore base-rate information, but it is important to effective judgment and decision making. In many occupations, the use of base-rate information is essential for adequate job performance. For example, suppose a doctor were told that a 10-year-old boy was suffering chest pains. The doctor would be much less likely to worry about an incipient heart attack than if told that a 50-year-old man had the identical symptom. Why? Because the base rate of heart attacks is much higher in 50-year-old men than in 10-year-old boys. Of course, people use other heuristics as well. People can be taught how to use base rates to improve their decision making (Gigerenzer, 1996; Koehler, 1996).
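A hedged sketch of how base rates drive such a judgment via Bayes' rule; all the numbers below (prior prevalences, hit rate, false-alarm rate) are invented for illustration:

```python
# P(heart attack | chest pain) via Bayes' rule, under made-up rates.
def posterior(prior, hit_rate, false_alarm_rate):
    p_symptom = hit_rate * prior + false_alarm_rate * (1 - prior)
    return hit_rate * prior / p_symptom

# Same symptom, very different base rates of heart attack:
print(posterior(prior=0.0001, hit_rate=0.9, false_alarm_rate=0.1))  # boy: about 0.0009
print(posterior(prior=0.05, hit_rate=0.9, false_alarm_rate=0.1))    # man: about 0.32
```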
Availability

Most of us at least occasionally use the availability heuristic, in which we make judgments on the basis of how easily we can call to mind what we perceive as relevant instances of a phenomenon (Tversky & Kahneman, 1973; see also Fischhoff, 1999; Sternberg, 2000). For example, consider the letter R. Are there more words in the English language that begin with the letter R or that have R as their third letter? Most respondents say that there are more words beginning with the letter R (Tversky & Kahneman, 1973). Why? Because generating words beginning with the letter R is easier than generating words having R as the third letter. In fact, there are more English-language words with R as their third letter. The same happens to be true of some other letters as well, such as K, L, N, and V.
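This is easy to check against any machine-readable dictionary; a sketch, assuming a one-word-per-line list at a hypothetical path words.txt:

```python
# Count words starting with "r" versus words with "r" as the third letter.
with open("words.txt") as f:  # hypothetical word list, one word per line
    words = [line.strip().lower() for line in f if line.strip()]

starts_with_r = sum(word.startswith("r") for word in words)
r_as_third = sum(len(word) >= 3 and word[2] == "r" for word in words)
print(starts_with_r, r_as_third)  # on typical word lists, the second count is larger
```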



The availability heuristic also has been observed in everyday situations. In one study, married partners individually stated which of the two partners performed a larger proportion of each of 20 different household chores (Ross & Sicoly, 1979). These tasks included mundane chores such as grocery shopping or preparing breakfast. Each partner stated that he or she more often performed about 16 of the 20 chores. Suppose each partner was correct. Then, to accomplish 100% of the work in a household, each partner would have to perform 80% of the work. Similar outcomes emerged from questioning members of college basketball teams and joint participants in laboratory tasks. For all participants, the greater availability of their own actions made it seem that each had performed a greater proportion of the work in joint enterprises.

Although clearly 80% + 80% does not equal 100%, we can understand why people may use the availability heuristic when it confirms their beliefs about themselves. However, people also use the availability heuristic when its use leads to a logical fallacy that has nothing to do with their beliefs about themselves. Two groups of participants were asked to estimate the number of words of a particular form that would be expected to appear in a 2000-word passage. For one group the form was _ _ _ _ i n g (i.e., seven letters ending in -ing). For the other group the form was _ _ _ _ _ n _ (i.e., seven letters with n as the second-to-last letter). Clearly, there cannot be more seven-letter words ending in -ing than seven-letter words with n as the second-to-last letter, because every word of the first form is also a word of the second form. But the greater availability of the former led to probability estimates that were more than twice as high for the former as for the latter (Tversky & Kahneman, 1983). This example illustrates how the availability heuristic can lead to the conjunction fallacy. In the conjunction fallacy, an individual gives a higher estimate for a subset of events (e.g., the instances of -ing) than for the larger set of events containing the given subset (e.g., the instances of n as the second-to-last letter). This fallacy also is illustrated in the chapter-opening vignette regarding Linda.

The representativeness heuristic may also induce individuals to commit the conjunction fallacy during probabilistic reasoning (Tversky & Kahneman, 1983; see also Dawes, 2000). Tversky and Kahneman asked college students:
Please give your estimate of the following values: What percentage of the men surveyed [in a health survey] have had one or more heart attacks? What percentage of the men surveyed both are over 55 years old and have had one or more heart attacks? (p. 308)

The mean estimates were 18% for the former and 30% for the latter. In fact, 65% of the respondents gave higher estimates for the latter (which is clearly a subset of the former). However, people do not always commit the conjunction fallacy. Only 25% of respondents gave higher estimates for the latter question than for the former when the questions were rephrased as frequencies rather than as percentages, that is, in terms of numbers of individuals within a given sample of the population. The researchers also found that the conjunction fallacy was less likely when the conjunctions were defined by the intersection of concrete classes than by a combination of properties. For example, fallacies were less likely for types of objects or individuals, such as dogs or beagles, than for features of objects or individuals,



such as conservatism or feminism (Tversky & Kahneman, 1983). Although classes and properties are equivalent from a logical standpoint, they generate different mental representations (Stenning & Monaghan, 2004). Different rules and relations are used in each of the two cases. Thus, the formal equivalence of properties to classes is not intuitively obvious to most people (Tversky & Kahneman, 1983).

A variant of the conjunction fallacy is the inclusion fallacy. In the inclusion fallacy, the individual judges a greater likelihood that every member of an inclusive category (e.g., dogs) has a particular characteristic than that every member of a subset of the inclusive category (e.g., beagles) has that characteristic (Shafir, Osherson, & Smith, 1990). For example, participants judged a much greater likelihood that "every single lawyer" is conservative than that every single labor-union lawyer is conservative. According to the researchers, we tend to judge the likelihood that the members of a particular class (e.g., lawyers) or subclass (e.g., labor-union lawyers) of individuals will demonstrate a particular characteristic (e.g., conservatism) based on the perceived typicality (i.e., representativeness) of the given characteristic for the given category. For example, based on the characteristics of Sony televisions, participants were asked to judge features of either Sony camcorders or bicycles (Joiner & Loken, 1998). Participants committed the inclusion fallacy more for the camcorders than for the bicycles, likely because camcorders are more representative of the type of product produced by Sony. We should, however, judge likelihood based on statistical probability.

Heuristics such as representativeness and availability do not always lead to wrong judgments or poor decisions. Indeed, we use these mental shortcuts because they are so often right. For example, one of the factors that leads to the greater availability of an event is in fact the greater frequency of the event. However, availability also may be influenced by recency of presentation (as in implicit-memory cueing, mentioned in Chapter 5), unusualness, or distinctive salience of a particular event or event category for the individual. Nonetheless, when the available information is not biased for some reason, the instances that are most available are generally the most common ones. Examples of biased coverage might be sensationalized press coverage, extensive advertising, recency of an uncommon occurrence, or personal prejudices. We generally make decisions in which the most common instances are the most relevant and valuable ones. In such cases, the availability heuristic is often a convenient shortcut with few costs. However, when particular instances are better recalled because of biases (e.g., your views of your own behavior in comparison with that of other people), the availability heuristic may lead to less than optimal decisions.
Other Judgment Phenomena

A heuristic related to availability is the anchoring-and-adjustment heuristic, by which people adjust their evaluations of things by means of certain reference points called end-anchors. Before you read on, quickly (in less than 5 seconds) calculate in your head the answer to the following problem:
8 × 7 × 6 × 5 × 4 × 3 × 2 × 1

Now, quickly calculate your answer to the following problem:
1 × 2 × 3 × 4 × 5 × 6 × 7 × 8



Although riding in a car is statistically much riskier than riding in a plane, people often feel less safe in a plane, in part because of the availability heuristic. People hear about every major U.S. plane crash that takes place, but they hear about relatively few car accidents.



Two groups of participants estimated the product of one or the other of the preceding two sets of eight numbers (Tversky & Kahneman, 1974). The median (middle) estimate for the participants given the first (descending) sequence was 2,250. For the participants given the second (ascending) sequence, the median estimate was 512. The actual product is 40,320 for both. The two products must be the same because the numbers are exactly the same (applying the commutative law of multiplication). Nonetheless, people provide a higher estimate for the first sequence than for the second. The reason is that their computation of the anchor, the first few digits multiplied by each other, yields a higher starting value from which they adjust to reach a final estimate.
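A small sketch of the arithmetic behind the anchors (mine; treating the first three factors as the "anchor" is illustrative):

```python
from math import prod

descending = [8, 7, 6, 5, 4, 3, 2, 1]
ascending = descending[::-1]

# The partial products people compute first differ sharply...
print(prod(descending[:3]), prod(ascending[:3]))  # anchors: 336 vs. 6
# ...even though the complete products are identical.
print(prod(descending), prod(ascending))          # 40320 40320
```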

Another consideration in decision theory is the influence of framing effects, in which the way that the options are presented influences the selection of an option (Tversky & Kahneman, 1981). For instance, we tend to choose options that demonstrate risk aversion when we are faced with an option involving potential gains. That is, we tend to choose options offering a small but certain gain rather than a larger but uncertain gain, unless the uncertain gain is either tremendously greater or only modestly less than certain. The example in the following "Investigating Cognitive Psychology" box is only slightly modified from one used by Tversky and Kahneman (1981).

INVESTIGATING COGNITIVE PSYCHOLOGY

Suppose that you were told that 600 people were at risk of dying of a particular disease. Vaccine A could save the lives of 200 of the people at risk. For Vaccine B, there is a 0.33 likelihood that all 600 people would be saved, but a 0.66 likelihood that all 600 people would die. Which option would you choose?

We tend to choose options that demonstrate risk seeking when we are faced with options involving potential losses. That is, we tend to choose options offering a large but uncertain loss rather than a smaller but certain loss, unless the uncertain loss is either tremendously greater or only modestly less than certain. The next "Investigating Cognitive Psychology" box provides an interesting example.
INVESTIGATING COGNITIVE PSYCHOLOGY

Suppose that for the 600 people at risk of dying of a particular disease, if Vaccine C is used, 400 people will die. However, if Vaccine D is used, there is a 0.33 likelihood that no one will die and a 0.66 likelihood that all 600 people will die. Which option would you choose?

In the preceding situations, most people choose Vaccine A and Vaccine D. Now, compare the number of people whose lives will be lost or saved by using Vaccines A or C. Similarly, compare the number of people whose lives will be lost or saved by using Vaccines B or D. In both cases, the expected value is identical. Our predilection for risk aversion versus risk seeking leads us to quite different choices based on the way in which a decision is framed, even when the actual outcomes of the choices are the same.
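A quick sketch of the expected-value comparison (using exact thirds where the text rounds to 0.33 and 0.66):

```python
# Expected number of people saved (out of 600 at risk) under each option.
def expected_saved(outcomes):
    return sum(prob * saved for prob, saved in outcomes)

options = {
    "A": [(1.0, 200)],            # 200 saved for certain
    "B": [(1/3, 600), (2/3, 0)],  # all saved or none saved
    "C": [(1.0, 600 - 400)],      # 400 die for certain = 200 saved
    "D": [(1/3, 600), (2/3, 0)],  # the same gamble, framed as losses
}
for name, outcomes in options.items():
    print(name, expected_saved(outcomes))  # every option: 200.0
```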
Another judgment phenomenon is illusory correlation, in which we tend to see particular events or particular attributes and categories as going together because we are predisposed to do so (Hamilton & Lickel, 2000).



In the case of events, we may see spurious cause-effect relationships. In the case of attributes, we may use personal prejudices to form and use stereotypes (perhaps as a result of using the representativeness heuristic). For example, suppose we expect people of a given political party to show particular intellectual or moral characteristics. The instances in which people show those characteristics are more likely to be available in memory and recalled more easily than are instances that contradict our biased expectations. In other words, we perceive a correlation between the political party and the particular characteristics.

Illusory correlation even may influence psychiatric diagnoses based on projective tests such as the Rorschach and the Draw-a-Person tests (Chapman & Chapman, 1967, 1969, 1975). Researchers suggested a false correlation in which particular responses would be associated with particular diagnoses. For example, they suggested that people diagnosed with paranoia tend to draw people with large eyes more than do people with other diagnoses. In fact, diagnoses of paranoia were no more likely to be linked to depictions of large eyes than were any other diagnoses. However, what happened when individuals expected to observe a correlation between the particular responses and the associated diagnoses? They tended to see the illusory correlation, although no actual correlation existed.

Another common error is overconfidence, an individual's overvaluation of his or her own skills, knowledge, or judgment. For example, people answered 200 two-alternative statements, such as "Absinthe is (a) a liqueur, (b) a precious stone." (Absinthe is a licorice-flavored liqueur.) People were asked to choose the correct answer and to state the probability that their answer was correct (Fischhoff, Slovic, & Lichtenstein, 1977). People were overconfident. For example, when people were 100% confident in their answers, they were right only 80% of the time. In general, people tend to overestimate the accuracy of their judgments (Kahneman & Tversky, 1996). Why are people overconfident? One reason is that people may not realize how little they know. A second reason is that they may not realize what they are assuming when they call on the knowledge they have. A third reason may be their ignorance of the fact that their information comes from unreliable sources (Carlson, 1995; Griffin & Tversky, 1992). Because of overconfidence, people often make poor decisions, based on inadequate information and ineffective decision-making strategies. Why we tend to be overconfident in our judgments is not clear. One simple explanation is that we prefer not to think about being wrong (Fischhoff, 1988).

An error in judgment that is quite common in people's thinking is the sunk-cost fallacy (Dupuy, 1998, 1999; Nozick, 1990). This is the decision to continue to invest in something simply because one has invested in it before and one hopes to recover one's investment. For example, suppose you have bought a car. It is a lemon. You already have invested thousands of dollars in getting it fixed. Now another major repair confronts you. You have no reason to believe that this additional repair will really be the last in the string of repairs. You think of how much money you have spent on repairs. You reason that you need to do the additional repair to justify the amount you already have spent. So you do the repair rather than buy a new car. You have just committed the sunk-cost fallacy.
The problem is that you already have lost the money on those repairs. Throwing more money into repairs will not get that money back. Your best bet may well be to view the money already spent on repairs as a "sunk cost" and, in this case, to buy a new car.

Baruch Fischhoff is a professor of social and decision sciences and a professor of engineering and public policy at Carnegie Mellon University. He has studied psychological processes such as hindsight bias, risk perception, and value elicitation. He also has done policy-making work in areas such as risk and environmental management.



Similarly, suppose you go on a vacation. You intend to stay for 2 weeks, but you are having a miserable time. Should you go home a week early? You decide not to. In this way, you attempt to justify the investment you already have made in the vacation. Again, you have committed the sunk-cost fallacy. Instead of viewing the money simply as lost on an unfortunate decision, you have decided to throw more money away, without any hope that the vacation will get any better.

Taking opportunity costs into account is important when judgments are made. Opportunity costs are the prices paid for availing oneself of certain opportunities. For example, suppose you see a great job offer in San Francisco. You always wanted to live there. You are ready to take it. Before you do, you need to ask yourself: What other things will I have to forgo to take advantage of this opportunity? An example might be the chance, on your budget, of having more than 500 square feet of living space. Another might be the chance to live in a place where you probably do not have to worry about earthquakes. Any time you take advantage of an opportunity, there are opportunity costs. They may, in some cases, make what looked like a good opportunity look like not such a great opportunity at all. Ideally, you should try to look at these opportunity costs in an unbiased way.

Finally, a bias that often affects all of us is hindsight bias: when we look at a situation retrospectively, we believe we easily can see all the signs and events leading up to a particular outcome (Fischhoff, 1982; Wasserman, Lempert, & Hastie, 1991). For example, suppose people are asked to predict the outcomes of psychological experiments in advance. People rarely are able to predict the outcomes at better than chance levels. However, when people are told the outcomes of psychological experiments, they frequently comment that these outcomes were obvious and easily would have been predicted in advance. Similarly, when intimate personal relationships are in trouble, people often fail to observe signs of the difficulties until the problems reach crisis proportions. By then, it may be too late to save the relationship. In retrospect, however, people may slap their foreheads and ask themselves, "Why didn't I see it coming? It was so obvious! I should have seen the signs."

Much of the work on judgment and decision making has focused on the errors we make. Human rationality is limited. Still, human irrationality also is limited (Cohen, 1981). We do act rationally in many instances. Also, each of us can improve our decision making through practice. We are most likely to do so if we obtain specific feedback regarding how to improve our decision-making strategies. Another key way to improve decision making is to gain accurate information for the calculation of probabilities, and then to use those probabilities appropriately in decision making. In addition, although subjective expected utility theory may offer a limited description of actual human decision making, it is far from useless. It offers a good prescription for enhancing the effectiveness of decision making when confronting a decision important enough to warrant the time and mental effort required (Slovic, 1990). Furthermore, we can try to avoid overconfidence in our intuitive guesses regarding optimal choices.
Yet another way to enhance our decision making is to use careful reasoning in drawing inferences about the various options available to us. The work on heuristics and biases shows the importance of distinguishing between intellectual competence and intellectual performance as it manifests itself in daily life. Even experts in the use of probability and statistics can find themselves falling into faulty patterns of judgment and decision making in their everyday lives.



People may be intelligent in a conventional, test-based sense. Yet they may show exactly the same biases and faulty reasoning that someone with a lower test score would show. People often fail to fully utilize their intellectual competence in their daily life. There even can be a wide gap between the two. Thus, if we wish to be intelligent in our daily lives and not just on tests, we have to be street smart. In particular, we must be mindful of applying our intelligence to the problems that continually confront us.

Heuristics do not always lead us astray. Sometimes, they are amazingly simple ways of drawing sound conclusions. For example, a simple heuristic, take the best, can be amazingly effective in decision situations (Gigerenzer & Goldstein, 1996; Gigerenzer, Todd, & the ABC Research Group, 1999; Marsh, Todd, & Gigerenzer, 2004). The rule is simple: in making a decision, identify the single most important criterion to you for making that decision. For example, when you choose a new automobile, the most important factor might be good gas mileage, safety, or appearance. This heuristic would seem on its face to be inadequate. In fact, it often leads to very good decisions, in many cases better ones than far more complicated strategies produce. Thus, heuristics can be used for good as well as for bad decision making. Indeed, when we take people's goals into account, heuristics often are amazingly effective (Evans & Over, 1996).

The take-the-best heuristic belongs to a class of heuristics called fast-and-frugal heuristics (FFHs). As the name implies, these heuristics are based on a small fraction of the available information, and decisions using them are made rapidly. These heuristics set a standard of rationality that considers constraints, including time, information, and cognitive capacity (Bennis & Pachur, 2006). Further, these models consider the lack of optimal solutions and the environments in which the decision is taking place. As a result, these heuristics provide a good description of decision making during sports (Bennis & Pachur, 2006). Other researchers have noted that FFHs can form a comprehensive description of how people behave in a variety of contexts, from lunch selections to how physicians decide whether to prescribe medication for depression (Scheibehenne, Miesler, & Todd, 2007; Smith & Gilhooly, 2006).
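A hedged sketch of take-the-best in the fuller form studied by Gigerenzer and Goldstein: rank cues by validity and let the first cue that discriminates decide. The cue names, validities, and car data below are invented for illustration.

```python
def take_the_best(option_a, option_b, cues):
    """cues: (name, validity) pairs; higher-validity cues are consulted first."""
    for name, _validity in sorted(cues, key=lambda c: c[1], reverse=True):
        a, b = option_a[name], option_b[name]
        if a != b:                 # the first discriminating cue decides; stop searching
            return "A" if a > b else "B"
    return "no decision"           # no cue discriminates: guess or use another rule

cues = [("gas_mileage", 0.8), ("safety", 0.7), ("appearance", 0.6)]
car_a = {"gas_mileage": 1, "safety": 0, "appearance": 1}  # 1 = favorable value
car_b = {"gas_mileage": 1, "safety": 1, "appearance": 0}
print(take_the_best(car_a, car_b, cues))  # "B": safety is the first cue that differs
```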

Neuroscience of Decision Making
As in problem solving, the prefrontal cortex, and particularly the anterior cingulate cortex, is active during the decision-making process (Barraclough, Conroy, & Lee, 2004; Kennerley & associates, 2006; Rogers & associates, 2004). Explorations of decision making in monkeys have noted activation in the parietal regions of the brain, and the amount of gain associated with a decision also affects the amount of activation observed in the parietal region (Platt & Glimcher, 1999). Examination of decision making in drug abusers identified a number of areas involved in risky decisions. Researchers found decreased activation in the left pregenual anterior cingulate cortex of drug abusers (Fishbein & associates, 2005). These findings suggest that, during decision making, the anterior cingulate cortex is involved in the consideration of potential rewards. Another interesting effect in this area is observed in participants who have difficulty with a decision. In one study, participants made decisions concerning whether an item was old or new and which of two items



Which city has a larger population, San Diego or San Antonio? Two-thirds of University of Chicago undergraduates got the answer right: San Diego. Then we asked German students who knew very little about San Diego, and many of whom had never even heard of San Antonio (Goldstein & Gigerenzer, 2002). What proportion of the German students do you think got the answer right? 100%. How can it be that people who know less about a subject get more correct answers? The answer is that the German students used the recognition heuristic. For the present case, this heuristic says: if one of two objects is recognized and the other is not, then infer that the recognized object has the higher value with respect to the criterion. Note that the American students could not use the recognition heuristic; they had heard of both cities. They had to rely on recall knowledge (i.e., facts) rather than recognition.

The recognition heuristic can be used only by people who have a sufficient degree of ignorance, that is, who recognize only some objects but not all. In such cases, the less-is-more effect can result: less knowledge can lead to more accurate judgments. Similar surprising results have been obtained in predicting the outcomes of British soccer games (e.g., Manchester United versus Shrewsbury Town) by people in England as opposed to Turkey. The recognition heuristic is also used in the supermarket when customers must choose among several similar products, preferring one whose name they have heard of. This heuristic is exploited by advertisements, like those of Benetton, which give no information about the product but simply try to increase name recognition. Finally, the recognition heuristic has also been successful on the stock market, where it managed to outperform major mutual funds and the Dow in picking stock investments (Borges & associates, 1999).

The recognition heuristic does not always apply, however, nor can it always make correct inferences. The effectiveness of this apparently simplistic heuristic depends on its ecological rationality: its ability to exploit the structure of the information in natural

environments. The heuristic is successful when ignorance, specifically a lack of recognition, is systematically rather than randomly distributed, that is, when it is strongly correlated with the criterion. Experimental studies indicate that 90% or more of participants rely on the recognition heuristic in situations in which it is ecologically rational.

In the Center for Adaptive Behavior and Cognition (ABC) at the Max Planck Institute for Human Development, we study not only this heuristic but a whole adaptive toolbox of heuristics. Part of the fun in the lab springs from the interdisciplinary nature of the ABC group. Psychologists collaborate with economists, mathematicians, computer scientists, and evolutionary biologists, among others. Using multiple methods, we attempt to open the adaptive toolbox.

The adaptive toolbox is, in two respects, a Darwinian metaphor for decision making. First, evolution does not follow a grand plan but results in a patchwork of solutions for specific problems. The same goes for the toolbox: its heuristics are domain specific, not general. Second, the heuristics in the adaptive toolbox are not good or bad, rational or irrational, per se, but only relative to an environment, just as adaptations are context-bound. In these two restrictions lies their potential: heuristics can perform astonishingly well when used in a suitable environment. The rationality of the adaptive toolbox is not logical, but rather ecological.

The ABC program aims at providing the building blocks, or, if you like, the ABC's of cognitive heuristics for choice, categorization, inference, estimation, preference, and other tasks. These heuristics are fast because they involve little computation, frugal because they search for only a little information, and robust because their simplicity makes it likely they can be generalized effectively to new environments. Herbert Simon once introduced the metaphor of a pair of scissors to exemplify what we call ecological rationality. One blade is the mind; the other, the environment. To understand cognition, we study the match between the structure of cognitive heuristics and the environment. Studying one blade alone, as much of cognitive science today does, will not reveal why and how cognition works.
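A minimal sketch of the recognition heuristic as stated above (the recognition set is a made-up example):

```python
def recognition_heuristic(a, b, recognized):
    """If exactly one object is recognized, infer it has the higher criterion value."""
    if (a in recognized) != (b in recognized):
        return a if a in recognized else b
    return None  # both or neither recognized: the heuristic does not apply

# A German student who has heard of San Diego but not San Antonio:
print(recognition_heuristic("San Diego", "San Antonio", {"San Diego"}))  # San Diego
```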



was larger (Fleck & associates, 2006). Decisions that were rated lowest in confidence and that took the most time to answer were associated with higher activation of the anterior cingulate cortex. These findings suggest that this area of the brain is involved in the comparison and weighing of possible solutions.
Reasoning

Judgment and decision making involve evaluating opportunities and selecting one choice over another. A related kind of thinking is reasoning. Reasoning is the process of drawing conclusions from principles and from evidence (Leighton, 2004a, 2004b; Leighton & Sternberg, 2004; Sternberg, 2004; Wason & Johnson-Laird, 1972). In reasoning, we move from what is already known to infer a new conclusion or to evaluate a proposed conclusion.

Reasoning is often divided into two types: deductive and inductive. Deductive reasoning is the process of reasoning from one or more general statements regarding what is known to reach a logically certain conclusion (Johnson-Laird, 2000; Rips, 1999; Williams, 2000). It often involves reasoning from one or more general statements regarding what is known to a specific application of the general statement. In contrast, inductive reasoning is the process of reasoning from specific facts or observations to reach a likely conclusion that may explain the facts. The inductive reasoner then may use that probable conclusion to attempt to predict future specific instances (Johnson-Laird, 2000). The key feature distinguishing inductive from deductive reasoning is that in inductive reasoning, we never can reach a logically certain conclusion; we only can reach a particularly well-founded or probable conclusion.

Deductive Reasoning
Deductive reasoning is based on logical propositions. A proposition is basically an assertion, which may be either true or false. Examples are "cognitive psychology students are brilliant," "cognitive psychology students wear shoes," and "cognitive psychology students like peanut butter." In a logical argument, premises are propositions about which arguments are made. Cognitive psychologists are particularly interested in propositions that may be connected in ways that require people to draw reasoned conclusions. That is, deductive reasoning is useful because it helps people connect various propositions to draw conclusions, and cognitive psychologists want to know how people connect propositions to do so. Some of these conclusions are well reasoned; others are not. Much of the difficulty of reasoning lies in even understanding the language of problems (Girotto, 2004). Some of the mental processes used in language understanding, and the cerebral functioning underlying them, are used in reasoning too (Lawson, 2004).

Conditional Reasoning
One of the primary types of deductive reasoning is conditional reasoning, in which the reasoner must draw a conclusion based on an if-then proposition. The conditional if-then proposition states that if antecedent condition p is met, then consequent event q follows. For example, "If students study hard, then they score high on their exams."


TABLE 12.2  Conditional Reasoning: Deductively Valid Inferences and Deductive Fallacies

Two kinds of conditional propositions lead to valid deductions, and two others lead to deductive fallacies. Each argument below assumes the conditional proposition p → q: "If you are a mother, then you have a child."

Deductively valid inferences:
  Modus ponens (affirming the antecedent). Existing condition: p ("You are a mother."). Inference: ∴ q ("Therefore, you have a child.")
  Modus tollens (denying the consequent). Existing condition: ¬q ("You do not have a child."). Inference: ∴ ¬p ("Therefore, you are not a mother.")

Deductive fallacies:
  Denying the antecedent. Existing condition: ¬p ("You are not a mother."). Inference: ∴ ¬q ("Therefore, you do not have a child.")
  Affirming the consequent. Existing condition: q ("You have a child."). Inference: ∴ p ("Therefore, you are a mother.")
Under some circumstances, if you have established a conditional proposition, then you may draw a well-reasoned conclusion. The usual set of conditional propositions from which you can draw a well-reasoned conclusion is "If p, then q. p. Therefore, q." This inference illustrates deductive validity: it follows logically from the propositions on which it is based. The following is also logical: "If students eat pizza, then they score high on their exams. They eat pizza. Therefore, they score high on their exams." As you may have guessed, deductive validity does not equate with truth. You can reach deductively valid conclusions that are completely untrue with respect to the world. Whether the conclusion is true depends on the truthfulness of the premises. In fact, people are more likely to mistakenly accept an illogical argument as logical if the conclusion is factually true. For now, however, we put aside the issue of truth and focus only on the deductive validity, or logical soundness, of the reasoning.

One set of propositions and its conclusion is the argument "If p, then q. p. Therefore, q," which is termed a modus ponens argument. In the modus ponens argument, the reasoner affirms the antecedent. For example, take the argument "If you are a husband, then you are married. Harrison is a husband. Therefore, he is married." The set of propositions for the modus ponens argument is shown in Table 12.2. In addition to the modus ponens argument, you may draw another well-reasoned conclusion from a conditional proposition, given a different second proposition: "If p, then q. Not q. Therefore, not p." This inference is also deductively valid. This particular set of propositions and its conclusion is termed a modus tollens argument, in which the reasoner denies the consequent. For example, we modify the second proposition of the argument to deny the consequent: "If you are a husband, then you are married. Harrison is not married. Therefore, he is not a husband."
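These validity claims can be checked mechanically; a sketch (mine) that enumerates all truth assignments for p and q and tests whether the conclusion holds whenever the premises do:

```python
from itertools import product

def implies(p, q):
    return (not p) or q

def is_valid(premises, conclusion):
    """Valid iff the conclusion is true in every case where the premises are true."""
    return all(conclusion(p, q)
               for p, q in product([True, False], repeat=2)
               if premises(p, q))

print(is_valid(lambda p, q: implies(p, q) and p,     lambda p, q: q))      # modus ponens: True
print(is_valid(lambda p, q: implies(p, q) and not q, lambda p, q: not p))  # modus tollens: True
print(is_valid(lambda p, q: implies(p, q) and not p, lambda p, q: not q))  # denying the antecedent: False
print(is_valid(lambda p, q: implies(p, q) and q,     lambda p, q: p))      # affirming the consequent: False
```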



Table 12.2 shows the two conditions in which a well-reasoned conclusion can be reached; it also shows two conditions in which such a conclusion cannot be reached. As the examples illustrate, some inferences based on conditional reasoning are fallacies: they lead to conclusions that are not deductively valid. When using conditional propositions, we cannot reach a deductively valid conclusion based either on denying the antecedent condition or on affirming the consequent. Let's return to the proposition, "If you are a husband, then you are married." We would not be able to confirm or to refute the proposition based on denying the antecedent: "Joan is not a husband. Therefore, she is not married." Even if we ascertain that Joan is not a husband, we cannot conclude that she is not married. Similarly, we cannot deduce a valid conclusion by affirming the consequent: "Joan is married. Therefore, she is a husband." Even if Joan is married, her spouse may not consider her a husband.

Conditional reasoning can be studied in the laboratory using a "selection task" (Wason, 1968, 1969, 1983; Wason & Johnson-Laird, 1970, 1972). Participants are presented with a set of four two-sided cards. Each card has a numeral on one side and a letter on the other side. Face up are two letters and two numerals: a consonant, a vowel, an even number, and an odd number. For example, participants may be faced with the following series of cards: S 3 A 2. Each participant then is told a conditional statement, such as "If a card has a consonant on one side, then it has an even number on the other side." The task is to determine whether the conditional statement is true or false by turning over the exact number of cards necessary to test it. That is, the participant must not turn over any cards that are not valid tests of the statement, but the participant must turn over all cards that are valid tests of the conditional proposition.

Table 12.3 illustrates the four possible tests participants might perform on the cards. Two of the tests (affirming the antecedent and denying the consequent) are both necessary and sufficient for testing the conditional statement. That is, to evaluate the deduction, the participant must turn over the card showing a consonant to see whether it has an even number on the other side, thereby affirming the antecedent (the modus ponens argument). In addition, the participant must turn over the card showing an odd number (i.e., not an even number) to see whether it has a vowel (i.e., not a consonant) on the other side, thereby denying the consequent (the modus tollens argument). The other two possible tests (denying the antecedent and affirming the consequent) are irrelevant. That is, the participant need not turn over the card showing a vowel (i.e., not a consonant); to do so would be to deny the antecedent. He or she also need not turn over the card showing an even number (i.e., not an odd number); to do so would be to affirm the consequent. Most participants knew to test for the modus ponens argument. However, many participants failed to test for the modus tollens argument. Some of these participants instead tried to deny the antecedent as a means of testing the conditional proposition.

Most people of all ages (at least starting in elementary school) appear to have little difficulty in recognizing and applying the modus ponens argument.
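The correct card choices can be derived mechanically; a sketch (mine) for the rule "if a consonant on one side, then an even number on the other," using the card faces from the example above:

```python
def is_consonant(face: str) -> bool:
    return face.isalpha() and face.lower() not in "aeiou"

def is_odd_number(face: str) -> bool:
    return face.isdigit() and int(face) % 2 == 1

# Turn a card only if its visible face could falsify the rule:
# a consonant (the antecedent p) or an odd number (the negated consequent, not-q).
cards = ["S", "3", "A", "2"]
print([c for c in cards if is_consonant(c) or is_odd_number(c)])  # ['S', '3']
```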
However, few people spontaneously recognize the need for reasoning by means of the modus tollens argument. Many people do not recognize the logical fallacies of denying the antecedent or affirming the consequent, at least as these fallacies are applied to abstract reasoning problems (Braine & O'Brien, 1991; O'Brien, 2004; Rips, 1988, 1994; Rumain, Connell, & Braine, 1983).


TABLE 12.3  Conditional Reasoning: Wason's Selection Task

In the Wason selection task, Peter Wason presented participants with a set of four cards, from which the participants were to test the validity of a given proposition. This table illustrates how a reasoner might test the conditional proposition (p → q), "If a card has a consonant on one side (p), then it has an even number on the other side (q)."

Deductively valid inferences:
  Based on modus ponens. Face shows: p, a consonant (e.g., "S," "F," "Y," or "P"). Test: ∴ q, does the card have an even number on the other side?
  Based on modus tollens. Face shows: ¬q, an odd rather than an even number (e.g., "3," "5," "7," or "9"). Test: ∴ ¬p, does the card have a vowel rather than a consonant on the other side?

Deductive fallacies:
  Based on denying the antecedent. Face shows: ¬p, a vowel rather than a consonant (e.g., "A," "E," "I," or "O"). Test: ∴ ¬q, does the card have an odd rather than an even number on the other side?
  Based on affirming the consequent. Face shows: q, an even number (e.g., "2," "4," "6," or "8"). Test: ∴ p, does the card have a consonant on the other side?
In fact, some evidence suggests that even people who have taken a course in logic fail to demonstrate deductive reasoning across various situations (Cheng & associates, 1986). Even training aimed directly at improving reasoning leads to mixed results. After training aimed at increasing reasoning, there is a significant increase in the use of mental models and rules, but there may be only a moderate increase in the use of deductive reasoning (Leighton, 2006). Most people do demonstrate conditional reasoning under two kinds of circumstances: conditions that minimize possible linguistic ambiguities, and conditions that activate schemas providing a meaningful context for the reasoning.

Why might both children and adults fallaciously affirm the consequent or deny the antecedent? Perhaps they do so because of invited inferences that follow from normal discourse comprehension of conditional phrasing (Rumain, Connell, & Braine, 1983). For instance, suppose that my publisher advertises, "If you buy this textbook, then we will give you a $5 rebate." In everyday situations, you probably correctly infer that if you do not buy this textbook, the publisher will not give you a $5 rebate. However, formal deductive reasoning would consider this denial of the antecedent to be fallacious.



The statement says nothing about what happens if you do not buy the textbook. Similarly, you may infer that you must have bought this textbook (affirming the consequent) if you received a $5 rebate from the publisher. But the statement says nothing about the range of circumstances that lead you to receive the $5 rebate; there may be other ways to receive it. Both inferences are fallacious according to formal deductive reasoning, but both are quite reasonable invited inferences in everyday situations. It helps when the wording of conditional-reasoning problems either explicitly or implicitly disinvites these inferences. Both adults and children are then much less likely to engage in these logical fallacies.

The demonstration of conditional reasoning also is influenced by the presence of contextual information that converts the problem from one of abstract deductive reasoning to one that applies to an everyday situation. For example, participants received both the Wason selection task and a modified version of it (Griggs & Cox, 1982). In the modified version, the participants were asked to suppose that they were police officers attempting to enforce the laws applying to the legal age for drinking alcoholic beverages. The particular rule to be enforced was "If a person is drinking beer, then the person must be over 19 years of age." Each participant was presented with a set of four cards: (1) "drinking a beer," (2) "drinking a Coke," (3) "16 years of age," and (4) "22 years of age." The participant then was instructed to "select the card or cards that you definitely need to turn over to determine whether or not the people are violating the rule" (p. 414). None of Griggs and Cox's participants had responded correctly on the abstract version of the Wason selection task, yet a remarkable 72% of the participants correctly responded to the modified version of the task.

A more recent modification of this task has shown that beliefs regarding plausibility influence whether people choose the modus tollens argument, that is, whether they deny the consequent by checking to see whether a person who is not older than 19 years of age is not drinking beer. Specifically, people are far more likely to try to deny the consequent when the test involves checking whether an 18-year-old is drinking beer than when it involves checking whether a 4-year-old is drinking beer, even though the logical argument is the same in both cases (Kirby, 1994).

How do people use deductive reasoning in realistic situations? Two investigators have suggested that, rather than using formal inference rules, people often use pragmatic reasoning schemas (Cheng & Holyoak, 1985). Pragmatic reasoning schemas are general organizing principles or rules related to particular kinds of goals, such as permissions, obligations, or causations. These schemas sometimes are referred to as pragmatic rules. These pragmatic rules are not as abstract as formal logical rules, yet they are sufficiently general and broad that they can apply to a wide variety of specific situations. Prior beliefs, in other words, matter in reasoning (Evans & Feeney, 2004). Alternatively, one's performance may be affected by perspective effects, that is, whether one takes the point of view of the police officers or of the people drinking the alcoholic beverages (Almor & Sloman, 1996; Staller, Sloman, & Ben-Zeev, 2000).
So it may not be permissions per se that matter. Rather, what may matter are the perspectives one takes when solving such problems. Thus, consider situations in which our previous experiences or our existing knowledge cannot tell us all we want to know. Pragmatic reasoning schemas help us deduce what might reasonably be true. Particular situations or contexts activate par-


For example, suppose that you are walking across campus. You see someone who looks extremely young. Then you see the person walk to a car. He unlocks it, gets in, and drives away. This observation would activate your permission schema for driving: "If you are to be permitted to drive alone, then you must be at least 16 years old." You might now deduce that the person you saw is at least 16 years old. In one experiment, 62% of participants correctly chose modus ponens and modus tollens arguments but not the two logical fallacies when the conditional-reasoning task was presented in the context of permission statements. Only 11% did so when the task was presented in the context of arbitrary statements unrelated to pragmatic reasoning schemas (Cheng & Holyoak, 1985).

Researchers conducted an extensive analysis comparing the standard abstract Wason selection task with an abstract form of a permission problem (Griggs & Cox, 1993). The standard abstract form might be "If a card has an 'A' on one side, then it must have a '4' on the other side." The abstract permission form might be "If one is to take action 'A,' then one must first satisfy precondition 'P.'" Performance on the abstract permission task was still superior (49% correct overall) to performance on the standard abstract task (only 9% correct overall). This was so even when the authors added to the standard abstract task a statement that framed the task in a checking context. An example would be "Suppose you are an authority checking whether or not certain rules are being followed." The permission form was still better if a rule-clarification statement was added. An example of this would be "In other words, in order to have an 'A' on one side, a card must first have a '4' on the other side." And the permission form was better even for explicit negations. For example, "NOT A" and "NOT 4" would be used instead of implicit negations for "A" and "4" (namely, "B" and "7"). Thus, although both the standard selection task and the permission-related task involve deductive reasoning, the two tasks actually appear to pose different problems (Griggs & Cox, 1993; Manktelow & Over, 1990, 1992). Pragmatic reasoning schemas do not, therefore, fully explain all aspects of conditional reasoning (Braine & O'Brien, 1991; Braine, Reiser, & Rumain, 1984; Rips, 1983, 1988, 1994). Indeed, people do not always use rules of reasoning at all (Garcia-Madruga & associates, 2000; Johnson-Laird & Savary, 1999; Smith, Langston, & Nisbett, 1992).

An altogether different approach to conditional reasoning takes an evolutionary view of cognition (Cummins, 2004). According to this view, we should consider what kinds of thinking skills would provide a naturally selective advantage for humans in adapting to our environment across evolutionary time (Cosmides, 1989; Cosmides & Tooby, 1996). To gain insight into human cognition, we should look to see what kinds of adaptations would have been most useful in the distant past. So we hypothesize about how human hunters and gatherers would have thought during the millions of years of evolutionary time that predated the relatively recent development of agriculture and the very recent development of industrialized societies.

How has evolution influenced human cognition? Humans may possess something like a schema-acquisition device (Cosmides, 1989). According to Cosmides, it facilitates our ability to quickly glean important information from our experiences. It also helps us to organize that information into meaningful frameworks. In her view, these schemas are highly flexible. But they also are specialized for selecting and organizing the information that will most effectively aid us in adapting to the situations we face. According to Cosmides, one of the distinctive adaptations shown by human hunters and gatherers has been in the area of social exchange. Hence, evolutionary development of human cognition should facilitate the acquisition of schemas related to social exchange. According to Cosmides, there are two kinds of inferences in particular that social-exchange schemas facilitate. One kind is inferences related to cost-benefit relationships. The other kind is inferences that help people detect when someone is cheating in a particular social exchange. Across nine experiments, participants demonstrated deductive reasoning that confirmed the predictions of social-exchange theory, rather than predictions based on permissions-related schemas or on abstract deductive-reasoning principles (Cosmides, 1989).
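For readers who think computationally, the normative analysis of the selection task can be summarized in a short program. The following is a minimal sketch, not a model of how people actually reason; the card labels and predicate names are ours, chosen to mirror the Griggs and Cox (1982) drinking-age version:

    # For a rule "if P then Q," only cards that could reveal the one
    # falsifying combination (P together with not-Q) need to be turned over.

    def cards_to_turn(cards, is_p, is_not_q):
        """Return the cards that must be checked to test 'if P then Q'."""
        return [card for card in cards if is_p(card) or is_not_q(card)]

    # "If a person is drinking beer, then the person must be over 19."
    cards = ["drinking a beer", "drinking a Coke",
             "16 years of age", "22 years of age"]
    answer = cards_to_turn(
        cards,
        is_p=lambda c: c == "drinking a beer",      # antecedent true (modus ponens)
        is_not_q=lambda c: c == "16 years of age",  # consequent false (modus tollens)
    )
    print(answer)  # ['drinking a beer', '16 years of age']

The abstract letter-and-number version has exactly the same structure (turn over the "A" card and the "7" card), which is what makes the large content effects described above so striking.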

Syllogistic Reasoning
In addition to conditional reasoning, the other key type of deductive reasoning is syllogistic reasoning, which is based on the use of syllogisms. Syllogisms are deductive arguments that involve drawing conclusions from two premises (Maxwell, 2005; Rips, 1994, 1999). All syllogisms comprise a major premise, a minor premise, and a conclusion. Sometimes, however, the only conclusion that can be drawn is that no logical conclusion can be reached on the basis of the two given premises.

Linear Syllogisms
In a syllogism, each of the two premises describes a particular relationship between two items, and at least one of the items is common to both premises. The items may be objects, categories, attributes, or almost anything else that can be related to something. Logicians designate the first term of the major premise as the subject. The common term is the middle term (which is used once in each premise). The second term of the minor premise is the predicate.

In a linear syllogism, the relationship among the terms is linear. It involves a quantitative or qualitative comparison. Each term shows either more or less of a particular attribute or quantity. Suppose, for example, that you are presented with the problem in the "Investigating Cognitive Psychology" box:

You are smarter than your best friend.
Your best friend is smarter than your roommate.
Which of you is the smartest?

Each of the two premises describes a linear relationship between two items; Table 12.4 shows the terms of each premise and the relationship of the terms in each premise. The deductive-reasoning task for the linear syllogism is to determine a relationship between two items that do not appear in the same premise. In the preceding linear syllogism, the problem solver needs to infer that you are smarter than your roommate to realize that you are the smartest of the three. When the linear syllogism is deductively valid, its conclusion follows logically from the premises. We correctly can deduce with complete certainty that you are the smartest of the three.


TABLE 12.4   Linear Syllogisms

What logical deduction can you reach based on the premises of this linear syllogism? Is deductive validity the same as truth?

                FIRST TERM (ITEM)     LINEAR RELATIONSHIP    SECOND TERM (ITEM)
Premise A       You                   are smarter than       your best friend.
Premise B       Your best friend      is smarter than        your roommate.
Conclusion      Who                   is/are                 the smartest of the three?

Your roommate or your best friend may, however, point out an area of weakness in your conclusion: even a conclusion that is deductively valid may not be objectively true. Of course, it is true in this example.

How do people solve linear syllogisms? Several different theories have been proposed. Some investigators have suggested that linear syllogisms are solved spatially, through mental representations of linear continua (DeSoto, London, & Handel, 1965; Huttenlocher, 1968). The idea here is that people imagine a visual representation laying out the terms on a linear continuum. For example, the premise "You are smarter than your roommate" might be represented mentally as an image of a vertical continuum, with your name above your roommate's. The linear continuum usually is visualized vertically, although it can be visualized horizontally. When answering the question, people consult this continuum and choose the item in the correct place along the continuum.

Other investigators have proposed that people solve linear syllogisms using a semantic model involving propositional representations (Clark, 1969). For example, the premise "You are smarter than your roommate" might be represented as [smarter (you, your roommate)]. According to this view, people do not use images at all but rather combine semantic propositions.

A third view is that people use a combination of spatial and propositional representations in solving the syllogisms (Sternberg, 1980). According to this view, people use propositions initially to represent each of the premises. They then form mental images based on the contents of these propositions. Model testing has tended to support the combination (or mixture) model over exclusively propositional or exclusively spatial representations (Sternberg, 1980).

None of the three models appears to be quite right, however, because they all represent performance averaged over many individuals. There seem instead to be individual differences in strategies, in which some people tend to use a more imaginal strategy and others tend to use a more propositional strategy (Sternberg & Weil, 1980). This result points out an important limitation of many psychological findings: Unless we consider each individual separately, we risk jumping to conclusions based on a group average that does not necessarily apply to each person individually (see Siegler, 1988). Whereas most people may use a combination strategy, not everyone does. The only way to find out which strategy each person uses is to examine each individual.
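Whatever internal representation people favor, the deduction itself can be modeled as building the ordering that the premises describe. The sketch below is a toy illustration of the "linear continuum" idea, assuming the premises form a single consistent chain; the function name and premise encoding are ours:

    def order_from_premises(premises):
        """premises: (greater, lesser) pairs, e.g., ('you', 'your best friend').
        Returns the items ordered from most to least of the attribute."""
        outranks = dict(premises)              # who each item directly outranks
        lesser_items = set(outranks.values())
        # The top item appears as a 'greater' term but never as a 'lesser' term.
        top = next(g for g in outranks if g not in lesser_items)
        chain = [top]
        while chain[-1] in outranks:
            chain.append(outranks[chain[-1]])
        return chain

    premises = [("you", "your best friend"), ("your best friend", "your roommate")]
    print(order_from_premises(premises))
    # ['you', 'your best friend', 'your roommate'] -> you are the smartest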


Categorical Syllogisms

Probably the most well-known kind of syllogism is the categorical syllogism. Like other kinds of syllogisms, categorical syllogisms comprise two premises and a conclusion. In the case of the categorical syllogism, the premises state something about the category memberships of the terms. In fact, each term represents all, none, or some of the members of a particular class or category. As with other syllogisms, each premise contains two terms. One of them must be the middle term, common to both premises. The first and the second terms in each premise are linked through the categorical membership of the terms. That is, one term is a member of the class indicated by the other term. However the premises are worded, they state that some (or all or none) of the members of the category of the first term are (or are not) members of the category of the second term. To determine whether the conclusion follows logically from the premises, the reasoner must determine the category memberships of the terms. An example of a categorical syllogism would be as follows:


All cognitive psychologists are pianists.
All pianists are athletes.
Therefore, all cognitive psychologists are athletes.

Logicians often use circle diagrams to illustrate class membership. They make it easier to figure out whether a particular conclusion is logically sound. The conclusion for this syllogism does in fact follow logically from the premises, as shown in the circle diagram in Figure 12.1. However, the conclusion is false because the premises are false. For the preceding categorical syllogism, the subject is cognitive psychologists, the middle term is pianists, and the predicate is athletes. In both premises, we asserted that all members of the category of the first term were members of the category of the second term.

Statements of the form "All A are B" sometimes are referred to as universal affirmatives because they make a positive (affirmative) statement about all members of a class (universal). In addition, there are three other kinds of possible statements in a categorical syllogism. One kind comprises universal negative statements (e.g., "No cognitive psychologists are flutists"). A second kind is particular affirmative statements (e.g., "Some cognitive psychologists are left-handed"). The last kind is particular negative statements (e.g., "Some cognitive psychologists are not physicists"). These are summarized in Table 12.5.

In all kinds of syllogisms, some combinations of premises lead to no logically valid conclusion. In categorical syllogisms, in particular, we cannot draw logically valid conclusions from categorical syllogisms with two particular premises or with two negative premises. For example, "Some cognitive psychologists are left-handed. Some left-handed people are smart." Based on these premises, you cannot conclude even that some cognitive psychologists are smart. The left-handed people who are smart might not be the same left-handed people who are cognitive psychologists. We just don't know. Consider a negative example: "No students are stupid. No stupid people eat pizza." We cannot conclude anything one way or the other about whether students eat pizza based on these two negative premises. As you may have guessed, people appear to have more difficulty (work more slowly and make more errors) when trying to deduce conclusions based on one or more particular premises or negative premises.
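Circle diagrams have a direct computational analogue: "All X are Y" is the subset relation between sets. The sketch below checks the pianist syllogism with Python sets; the member names are hypothetical, and the point is only that subset relations are transitive, so the argument form is deductively valid even though its premises are false:

    cognitive_psychologists = {"Ana", "Ben"}
    pianists = {"Ana", "Ben", "Chloe"}
    athletes = {"Ana", "Ben", "Chloe", "Dev"}

    premise_1 = cognitive_psychologists <= pianists   # All cognitive psychologists are pianists.
    premise_2 = pianists <= athletes                  # All pianists are athletes.
    conclusion = cognitive_psychologists <= athletes  # All cognitive psychologists are athletes.

    # Whenever both premises hold, the conclusion must hold.
    print(premise_1, premise_2, conclusion)  # True True True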


FIGURE 12.1. Circle diagrams may be used to represent categorical syllogisms such as the one shown here: "All pianists are athletes. All cognitive psychologists are pianists. Therefore, all cognitive psychologists are athletes." From In Search of the Human Mind by Robert J. Sternberg. Copyright © 1995 by Harcourt Brace & Company. Reproduced by permission of the publisher.

Various theories have been proposed as to how people solve categorical syllogisms. One of the earliest theories was the atmosphere bias (Begg & Denny, 1969; Woodworth & Sells, 1935). There are two basic ideas in this theory. The first is that if there is at least one negative in the premises, people will prefer a negative solution. The second is that if there is at least one particular in the premises, people will prefer a particular solution. For example, if one of the premises is "No pilots are children," people will prefer a solution that has the word no in it. Nonetheless, the atmosphere bias does not account very well for large numbers of responses.

Other researchers focused attention on the conversion of premises (Chapman & Chapman, 1959). Here, the terms of a given premise are reversed. People sometimes believe that the reversed form of the premise is just as valid as the original form. The idea here is that people tend to convert statements like "If A, then B" into "If B, then A."


TABLE 12.5   Categorical Syllogisms: Types of Premises

The premises of categorical syllogisms may be universal affirmatives, universal negatives, particular affirmatives, or particular negatives.

Universal affirmative. Form: "All A are B." The premise positively (affirmatively) states that all members of the first class (universal) are members of the second class. Example: "All men are males." Reversibility*: Nonreversible; "All A are B" does not equal "All B are A" (compare "All males are men").

Universal negative. Form: "No A are B" (alternative: "All A are not B"). The premise states that none of the members of the first class are members of the second class. Example: "No men are females," or "All men are not females." Reversibility*: Reversible; "No A are B" = "No B are A" ("No men are females" = "No females are men").

Particular affirmative. Form: "Some A are B." The premise states that only some of the members of the first class are members of the second class. Example: "Some females are women." Reversibility*: Nonreversible; "Some A are B" does not equal "Some B are A" (compare "Some women are females").

Particular negative. Form: "Some A are not B." The premise states that some members of the first class are not members of the second class. Example: "Some females are not women." Reversibility*: Nonreversible; "Some A are not B" does not equal "Some B are not A" (compare "Some women are not females").

*In formal logic, the word some means "some and possibly all." In common parlance, and as used in cognitive psychology, some means "some and not all." Thus, in formal logic, the particular affirmative also would be reversible. For our purposes, it is not.
A." They do not realize that the statements are not equivalent. These errors are made by children and adults alike (Markovits, 2004) . A more widely accepted theory i s based on the notion that people solve syllo­ gisms by using a semantic (meaning-based) process based on mental models ( Espino & associates, 2005; Johnson-Laird, 1 997; Johnson-Laird & associates, 1 999; Johnson­ Laird, Byrne, & Schaeken, 1 992; Johnson-Laird & Savary, 1 999; Johnson-Laird & Steedman, 1 978). This view of reasoning as involving semantic processes based on mental models may be contrasted with rule-based ( "syntactic") processes, such as those characterized by formal logic. A mental model is an internal representation of information that corresponds analogously with whatever is being represented ( see Johnson-Laird, 1 983 ) . Some mental models are more likely to lead to a deductively valid conclusion than are others. In particular, some mental models may not be effec­ tive in disconfirming an invalid conclusion. For example, in the Johnson-Laird study, participants were asked to describe their conclusions and their mental models for the syllogism, "All of the artists are beekeep-


Philip Johnson-Laird is a professor of psychology at Princeton University. He is best known for his work on mental models, deductive reasoning, and creativity. In particular, Johnson-Laird has shown how the concept of mental models can be applied toward understanding a wide variety of psychological processes.
One participant said, "I thought of all the little . . . artists in the room and imagined they all had beekeeper's hats on" (Johnson-Laird & Steedman, 1978, p. 77). Figure 12.2 shows two different mental models for this syllogism. As the figure shows, the choice of a mental model may affect the reasoner's ability to reach a valid deductive conclusion. Because some models are better than others for solving some syllogisms, a person is more likely to reach a deductively valid conclusion by using more than one mental model. In the figure, the mental model shown in part (a) may lead to the deductively invalid conclusion that some artists are clever. By observing the alternative model in part (b), we can see an alternative view of the syllogism. It shows that the conclusion that some artists are clever may not be deduced on the basis of this information alone. Specifically, perhaps the beekeepers who are clever are not the same as the beekeepers who are artists.

FIGURE 12.2. Philip Johnson-Laird and Mark Steedman hypothesized that people use various mental models analogously to represent the items within a syllogism. Some mental models are more effective than others, and for a valid deductive conclusion to be reached, more than one model may be necessary, as shown here. (See text for explanation.)

Two types of representations of syllogisms are often used by logicians. As mentioned previously, circle diagrams are often used to represent categorical syllogisms. In circle diagrams, you can use overlapping, concentric, or nonoverlapping circles to represent the members of different categories (see Figures 12.1 and 12.2). An alternative representation often used by logicians is a truth table. It can be used to represent the truth value of various combinations of propositions, based on the truth value of each of the component propositions. People can learn how to improve their reasoning by being taught how to use circle diagrams or truth tables (Nickerson, 2004).

According to the mental-models view, the difficulty of many problems of deductive reasoning relates to the number of mental models needed for adequately representing the premises of the deductive argument (Johnson-Laird, Byrne, & Schaeken, 1992). Arguments that entail only one mental model may be solved quickly and accurately. However, to infer accurate conclusions based on arguments that may be represented by multiple alternative models is much harder. Such inferences place great demands on working memory (Gilhooly, 2004). In these cases, the individual must simultaneously hold in working memory each of the various models. Only in this way can he or she reach or evaluate a conclusion. Thus, limitations of working-memory capacity may underlie at least some of the errors observed in human deductive reasoning (Johnson-Laird, Byrne, & Schaeken, 1992).

In two experiments, the role of working memory was studied in syllogistic reasoning (Gilhooly & associates, 1993). In the first, syllogisms were simply presented either orally or visually. Oral presentation placed a considerably higher load on working memory because participants had to remember the premises; in the visual-presentation condition, participants could look at the premises. As predicted, performance was lower in the oral-presentation condition. In the second experiment, participants needed to solve syllogisms while at the same time performing another task that either drew on working-memory resources or did not. The researchers found that the task that drew on working-memory resources interfered with syllogistic reasoning; the task that did not draw on these resources did not.

As children get older, their effective use of working memory increases. So their ability to use mental models that draw on working-memory resources also increases.
Researchers asked children of different ages to look at cards with objects displayed on them, such as shirts and trousers of different colors (Barrouillet & Lecas, 1998). Then they were given premises, such as "If you wear a white shirt, you wear green trousers."


They were asked to indicate which combinations of cards were consistent with the premises. For example, cards with white shirts and green trousers would be consistent with this premise, but so would shirts and trousers of other colors, because the statement also implies that if you do not wear green trousers, you do not wear a white shirt. Older children, who were able to hold more combinations in working memory, chose more different card combinations to represent the premises.

Other factors also may contribute to the ease of forming appropriate mental models. People seem to solve logical problems more accurately and more easily when the terms have high imagery value (Clement & Falmagne, 1986). This situation probably facilitates their mental representation. Similarly, when propositions showed high relatedness in terms of mental images, participants could more easily and accurately solve the problems and judge the accuracy of the conclusions. An example would be one premise about dogs and the other about cats, rather than one about dogs and the other about tables.


For example, it would be relatively easy to solve a high-imagery, high-relatedness syllogism, such as "Some artists are painters. Some painters use black paint." It would be relatively hard to solve a low-imagery, low-relatedness syllogism, such as "Some texts are prose. Some prose is well-written." High imagery value and high relatedness may make it easier for reasoners to come up with counterexamples that reveal an argument to be deductively invalid (Clement & Falmagne, 1986).

Some deductive-reasoning problems comprise more than two premises. For example, transitive-inference problems, in which problem solvers must order multiple terms, can have any number of premises linking large numbers of terms. Mathematical and logical proofs are deductive in character and can have many steps as well.
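The truth tables mentioned earlier can be generated mechanically. The snippet below is a minimal sketch that prints the truth table for the material conditional "if P then Q," which is false only when P is true and Q is false:

    from itertools import product

    print("P      Q      if P then Q")
    for p, q in product([True, False], repeat=2):
        print(f"{str(p):6} {str(q):6} {(not p) or q}")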

Further Aids and Obstacles to Deductive Reasoning
In deductive reasoning, as in many other cognitive processes, we engage in many heuristic shortcuts. They sometimes lead to inaccurate conclusions. In addition to these shortcuts, we often are influenced by biases that distort the outcomes of our reasoning.

Heuristics in syllogistic reasoning include overextension errors, in which we overextend the use of strategies that work in some syllogisms to syllogisms in which the strategies fail us. For example, although reversals work well with universal negatives, they do not work with other kinds of premises. We also experience foreclosure effects when we fail to consider all the possibilities before reaching a conclusion. For example, we may fail to think of contrary examples when inferring conclusions from particular or negative premises. In addition, premise-phrasing effects may influence our deductive reasoning. Examples would be the sequence of terms or the use of particular qualifiers or negative phrasing. Premise-phrasing effects may lead us to leap to a conclusion without adequately reflecting on the deductive validity of the syllogism.

Biases that affect deductive reasoning generally relate to the content of the premises and the believability of the conclusion. They also reflect the tendency toward confirmation bias, in which we seek confirmation rather than disconfirmation of what we already believe. Suppose the content of the premises and a conclusion seem to be true. In such cases, reasoners tend to believe in the validity of the conclusion, even when the logic is flawed (Evans, Barston, & Pollard, 1983). Confirmation bias can be detrimental and even dangerous in some circumstances. For instance, in an emergency room, if a doctor assumes that a patient has condition X, the doctor may interpret the set of symptoms as supporting the diagnosis without fully considering all alternative interpretations (Pines, 2005). This shortcut can result in inappropriate diagnosis and treatment, which can be extremely dangerous. Other circumstances where the effects of confirmation bias can be observed are police investigations, paranormal beliefs, and stereotyping behavior (Ask & Granhag, 2005; Biernat & Ma, 2005; Lawrence & Peters, 2004). To a lesser extent, people also show the opposite tendency: to reject the validity of the conclusion when the conclusion or the content of the premises contradicts the reasoner's existing beliefs (Evans, Barston, & Pollard, 1983; Janis & Frick, 1943). This is not to say that people fail to consider logical principles when reasoning deductively. In general, explicit attention to the premises seems more likely to lead to valid inferences. Explicit attention to irrelevant information more often leads to inferences based on prior beliefs regarding the believability of the conclusion (Evans, Barston, & Pollard, 1983).


To enhance our deductive reasoning, we may try to avoid heuristics and biases that distort our reasoning, and we may engage in practices that facilitate reasoning. For example, we may take longer to reach or to evaluate conclusions. Effective reasoners also consider more alternative conclusions than do poor reasoners (Galotti, Baron, & Sabini, 1986). In addition, training and practice seem to increase performance on reasoning tasks. The benefits of training tend to be strong when the training relates to pragmatic reasoning schemas (Cheng & associates, 1986) or to such fields as law and medicine (Lehman, Lempert, & Nisbett, 1987). The benefits are weaker for abstract logical problems divorced from our everyday life (see Holland & associates, 1986; Holyoak & Nisbett, 1988).

One factor that affects syllogistic reasoning is mood. When people are in a sad mood, they tend to pay more attention to details (Schwarz & Skurnik, 2003). Hence, perhaps surprisingly, they tend to do better in syllogistic reasoning tasks when they are in a sad mood than when they are in a happy mood (Fiedler, 1988; Melton, 1995). People in a neutral mood tend to show performance in between the two extremes.

Even without training, you can improve your own deductive reasoning by developing strategies to avoid making errors. Make sure you are using the proper strategies in solving syllogisms; remember that reversals work only with universal negatives. Sometimes translating abstract terms to concrete ones (e.g., the letter C to cows) can help. Also, take the time to consider contrary examples and to create more mental models. The more mental models you use for a given set of premises, the more confident you can be that if your conclusion is not valid, it will be disconfirmed. Thus, the use of multiple mental models increases the likelihood of avoiding errors; it also helps you to avoid the tendency to engage in confirmation bias. Circle diagrams also can be helpful in solving deductive-reasoning problems.
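The advice to create more mental models can be made concrete as a search for countermodels. The sketch below is a brute-force illustration, not a psychological model: it enumerates tiny worlds of three individuals and looks for one in which the beekeeper premises from earlier hold but the tempting conclusion "Some artists are clever" fails:

    from itertools import product

    # Premises: All artists are beekeepers. Some beekeepers are clever.
    # Tempting (but invalid) conclusion: Some artists are clever.
    people = range(3)
    counterexample = None
    for bits in product([0, 1], repeat=9):  # every assignment of 3 people to 3 categories
        artists    = {p for p in people if bits[p]}
        beekeepers = {p for p in people if bits[3 + p]}
        clever     = {p for p in people if bits[6 + p]}
        premises_hold = (
            bool(artists)                  # read "all artists" as presupposing some artists
            and artists <= beekeepers      # All artists are beekeepers.
            and bool(beekeepers & clever)  # Some beekeepers are clever.
        )
        if premises_hold and not (artists & clever):
            counterexample = (artists, beekeepers, clever)
            break

    # A world exists where the premises are true but no artist is clever,
    # so the conclusion does not follow deductively.
    print(counterexample)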

Inductive Reasoning
With deductive reasoning, reaching logically certain (deductively valid) conclusions is at least theoretically possible. In inductive reasoning, which is based on our observations, reaching any logically certain conclusion is not possible. The most that we can strive to reach is a strong, or highly probable, conclusion (Johnson-Laird, 2000; Thagard, 1999).

For example, suppose that you notice that all the people enrolled in your cognitive psychology course are on the dean's list (or honor roll). From these observations, you could reason inductively that all students who enroll in cognitive psychology are excellent students (or at least earn the grades to give that impression). However, unless you can observe the grade-point averages of all people who ever have taken or ever will take cognitive psychology, you will be unable to prove your conclusion. Further, a single poor student who happened to enroll in a cognitive psychology course would disprove your conclusion. Still, after large numbers of observations, you might conclude that you had made enough observations to reason inductively.


The fundamental riddle of induction is how we can make any inductions at all: as the future has not happened, how can we predict what it will bring? There is also an important so-called new riddle of induction (Goodman, 1983): given possible alternative futures, how do we know which one to predict? For example, in the number-series problem 2, 4, 6, ?, most people would replace the question mark with an 8. But we cannot know for sure that the correct number is 8. A mathematical formula could be proposed that would yield any number at all as the next number. So why choose the pattern of ascending even numbers (2x, where x is the sequence of increasing integers)? Partly we choose it because it seems simple to us; it is a less complex formula than others we might choose. And partly we choose it because we are familiar with it. We are used to ascending series of even numbers. But we are not used to other, more complex series in which 2, 4, 6 may be embedded, such as 2, 4, 6, 10, 12, 14, 18, 20, 22, and so forth.

In this situation and in many others requiring reasoning, you were not given clearly stated premises or obvious, certain relationships between the elements. Such information could lead you to deduce a surefire conclusion; in its absence, you cannot deduce a logically valid conclusion at all. At these times, an alternative kind of reasoning is needed. Inductive reasoning involves reasoning where there is no logically certain conclusion. Often it involves reasoning from specific facts or observations to a general conclusion that may explain the facts. Inductive reasoning forms the basis of the empirical method. In it, we cannot logically leap from saying, "All observed instances to date of X are Y," to saying, "Therefore, all X are Y." It is always possible that the next observed X will not be a Y. Furthermore, regardless of the number of observations or the soundness of the reasoning, no inductively based conclusions can be proved. Such conclusions can only be supported, to a greater or lesser degree, by available evidence. Thus, we return to the need to consider probability. The inductive reasoner must state any conclusions about a hypothesis in terms of likelihoods. Examples are "There is a 99% chance of rain tomorrow," or "The probability is only 0.05 that the null hypothesis is correct in asserting that these findings are a result of random variation."

Cognitive psychologists probably agree on at least two of the reasons why people use inductive reasoning. First, it helps them to become increasingly able to make sense of the great variability in their environment. Second, it helps them to predict events in their environment, thereby reducing their uncertainty. Thus, cognitive psychologists seek to understand the how rather than the why of inductive reasoning. We may (or may not) have some innate schema-acquisition device, but we certainly are not born with all the inferences we manage to induce. We already have implied that inductive reasoning often involves the processes of generating and testing hypotheses. We may further figure out that we reach inferences by generalizing some broad understandings from a set of specific instances. As we observe additional instances, we may further broaden our understanding. Or we may infer specialized exceptions to the general understandings. For example, after observing quite a few birds, we may infer that birds can fly. But after observing penguins and ostriches, we may add to our generalized knowledge specialized exceptions for flightless birds.

During generalization, we observe that particular properties vary together across diverse instances of a concept, or we may observe that particular procedures covary across different events. We then can induce some general principles for those covariations.


The great puzzle of inductive reasoning is how we manage to infer useful general principles based on the huge number of observations of covariation to which we are constantly exposed. Humans do not approach induction with mind-staggering computational abilities to calculate every possible covariation. Nor can we derive inferences from just the most frequent or the most plausible of these covariations. Rather, we seem to approach this task as we approach so many other cognitive tasks: We look for shortcuts. Inductive reasoners, like other probabilistic reasoners, use heuristics. Examples are representativeness, availability, the law of large numbers, and the unusualness heuristic. When using the unusualness heuristic, we pay particular attention to unusual events. When two unusual events co-occur or occur in proximity to one another, we tend to assume that the two events are connected in some way. For example, we might infer that the former unusual event caused the latter one (Holyoak & Nisbett, 1988).
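Goodman's point about the 2, 4, 6 series above can be made concrete: many rules fit the same observations. The sketch below shows two such rules; both rules are ours, chosen for illustration:

    def ascending_evens(n):
        """The familiar rule: 2x for x = 1, 2, 3, ..."""
        return [2 * (x + 1) for x in range(n)]

    def evens_skipping_multiples_of_8(n):
        """A less familiar rule that yields the same first three terms."""
        out, k = [], 2
        while len(out) < n:
            if k % 8 != 0:
                out.append(k)
            k += 2
        return out

    print(ascending_evens(6))                # [2, 4, 6, 8, 10, 12]
    print(evens_skipping_multiples_of_8(6))  # [2, 4, 6, 10, 12, 14]

Both rules match every observation made so far; only future observations could discriminate between them, which is exactly the inductive predicament.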

Reaching Causal Inferences
One approach to studying inductive reasoning is to examine causal inferences: how people make judgments about whether something causes something else (Cheng, 1997, 1999; Cheng & Holyoak, 1995; Koslowski, 1996; Spellman, 1997). One of the first investigators to propose a theory of how people make causal judgments was John Stuart Mill (1887). He proposed a set of canons: widely accepted heuristic principles on which people may base their judgments.

For example, one of Mill's canons is the method of agreement. It involves making separate lists of the possible causes that are present and those that are absent when a given outcome occurs. If, of all the possible causes, only one is present in all instances of the given outcome, the observer can conclude inductively that the one cause present in all instances is the true cause. That is, despite all the differences among possible causes, there is agreement in terms of one cause and one effect. For example, suppose a number of people in a given community contracted hepatitis. The local health authorities would try to track down all the various possible means by which each of the hepatitis sufferers had contracted the disease. Now suppose it turned out that they all lived in different neighborhoods, shopped at different grocery stores, had different physicians and dentists, and otherwise led very different lives, but that they all ate in the same restaurant on a given night. The health authorities probably would conclude inductively that the sufferers contracted hepatitis while eating at that restaurant.

Another of Mill's canons is the method of difference. In this method, you observe that all the circumstances in which a given phenomenon occurs are just like those in which it does not occur, except for one way in which they differ. For example, suppose that a particular group of students all live in the same dormitory, eat the same food in the same dining halls, sleep on the same schedule, and take all the same classes. But some of the students attend one discussion group, and other students attend another. The students in discussion group A get straight As; the students in discussion group B get straight Cs. We could conclude inductively that something is happening in the discussion groups to lead to this difference. Does this method sound familiar? If the observer manipulated the various aspects of this method, the method might be called an empirical experiment.


TABLE 12.6   Market Analyst Observations Regarding Cosmetics Manufacturers

Based on the information given here, how would you determine causality?

Company 1: The office staff of the company organized and joined a union. The company's major product was under suspicion as a carcinogen. There was a drastic drop in the value of the company's stock.

Company 2: The office staff of the company did not organize and join a union. The company's major product was under suspicion as a carcinogen. There was a drastic drop in the value of the company's stock.

Company 3: Illegal campaign contributions were traced to the company's managers. The company's major product was not under suspicion as a carcinogen. There was no drastic drop in the value of the company's stock.
You would hold constant all the variables but one. You would then manipulate this variable to observe whether it is distinctively associated with the predicted outcome. In fact, inductive reasoning may be viewed as hypothesis testing (Bruner, Goodnow, & Austin, 1956).

One study investigated causal inference by giving people scenarios such as the one shown in Table 12.6 (Schustack & Sternberg, 1981). Participants were to use the information describing the consequences for each company. They needed to figure out whether a company's stock values would drop if the company's major product were under suspicion as a carcinogen. People used four pieces of information to make causal judgments, as shown in Table 12.7. Specifically, they tended to confirm that an event was causal in one of two ways: on the basis of the joint presence of the possibly causal event and the outcome, or on the basis of the joint absence of the possibly causal event and the outcome. They tended to disconfirm the causality of a particular antecedent event in two ways as well: on the basis of the presence of the possibly causal event but the absence of the outcome, or on the basis of the absence of the possibly causal event but the presence of the outcome. In these ways, people can be quite rational in making causal judgments.

However, we do fall prey to various common errors of inductive reasoning. One common error of induction relates to the law of large numbers. Under some circumstances, we recognize that a greater number of observations strengthens the likelihood of our conclusions. At other times, however, we fail to consider the size of the sample we have observed when assessing the strength or the likelihood of a particular inference. In addition, most of us tend to ignore base-rate information. Instead, we focus on unusual variations or salient anecdotes. Awareness of these errors can help us improve our decision making.

Perhaps our greatest failing is one that extends to psychologists, other scientists, and nonscientists alike: We demonstrate confirmation bias, which may lead us to errors such as illusory correlations (Chapman & Chapman, 1967, 1969, 1975). Furthermore, we frequently make mistakes when attempting to determine causality based on correlational evidence alone. As has been stated many times, correlational evidence cannot indicate the direction of causation. Suppose we observe a correlation between Factor A and Factor B. We may find one of three things: First, it may be that Factor A causes Factor B. Second, it may be that Factor B causes Factor A. Third, some higher-order Factor C may be causing both Factors A and B to occur together.


TABLE 12.7   Four Bases for Inferring Causality

Even nonlogicians often use available information effectively when assessing causality.

Confirmation: the joint presence of the possibly causal event and the outcome. If an event and an outcome tend to co-occur, people are more likely to believe that the event causes the outcome. Example: If some other company had a major product suspected to be a carcinogen and its stock went down, that pairing of facts would increase people's belief that having a major product labeled as a carcinogen depresses stock values.

Confirmation: the joint absence of the possibly causal event and the outcome. If the outcome does not occur in the absence of the possibly causal event, then people are more likely to believe that the event causes the outcome. Example: If other companies' stocks have not gone down when they had no products labeled as carcinogens, then the absence of both the carcinogens among the major products and the stock drops is at least consistent with the notion that having a product labeled as a carcinogen might cause stocks to drop.

Disconfirmation: the presence of the possibly causal event but the absence of the outcome. If the possibly causal event is present but not the outcome, then the event is seen as less likely to lead to the outcome. Example: If other companies have had major products labeled as carcinogens but their stocks have not gone down, people would be more likely to conclude that having a major product labeled as a carcinogen does not lead to drops in stock prices.

Disconfirmation: the absence of the possibly causal event but the presence of the outcome. If the outcome occurs in the absence of the possibly causal event, then the event is seen as less likely to lead to the outcome. (This rule is one of Mill's canons.) Example: If other companies have had stock prices drop without having products labeled as carcinogens, people would be less likely to infer that having a product labeled as a carcinogen leads to decreases in stock prices.
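The four bases in Table 12.7 amount to the four cells of a 2 x 2 contingency table between a possible cause and an outcome. A toy tally (the observations below are hypothetical, for illustration only):

    observations = [
        # (cause_present, outcome_present)
        (True, True),    # joint presence        -> confirming
        (False, False),  # joint absence         -> confirming
        (True, False),   # cause without outcome -> disconfirming
        (False, True),   # outcome without cause -> disconfirming
        (True, True),
    ]

    confirming = sum(cause == outcome for cause, outcome in observations)
    disconfirming = sum(cause != outcome for cause, outcome in observations)
    print(confirming, disconfirming)  # 3 2 -- the evidence weakly favors a causal link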

A related error occurs when we fail to recognize that many phenomena have multiple causes. A car accident, for example, may have originated with the negligence of several drivers, rather than just one. Once we have identified one of the suspected causes of a phenomenon, we may commit what is known as a discounting error: We stop searching for additional alternative or contributing causes.

Confirmation bias can have a major effect on our everyday lives. For example, we may meet someone, expecting not to like him. As a result, we may treat him in ways that are different from how we would treat him if we expected to like him. He then may respond to us in less favorable ways.


He thereby "confirms" our original belief that he is not likable. Confirmation bias thereby can play a major role in schooling. Teachers often expect little of students when they think them low in ability. The students then give the teachers little. The teachers' original beliefs are thereby "confirmed" (Sternberg, 1997). This effect is referred to as a self-fulfilling prophecy (Harber & Jussim, 2005).

Research has investigated the relationship between covariation (correlation) information and causal inferences (Ahn & associates, 1995; Ahn & Bailenson, 1996). For some information to contribute to causal inferences, the information necessarily must be correlated with the event. But this covariation information is not sufficient to imply causality. The researchers proposed that the covariation information also must provide information about a possible causal mechanism for the information to contribute to causal inferences. Consider their example. In attempting to determine the cause of Jane's car accident last night, one could use purely covariational information, such as "Jane is more likely than most to have a car accident" and "car accidents were more likely to have occurred last night." However, in making causal attributions, people prefer information specifically about causal mechanisms, such as "Jane was not wearing her glasses last night" and "the road was icy," over information that only covaries with the car accident (Ahn & associates, 1995). Both of these latter pieces of information about Jane's car accident can be considered covariation information, but the descriptions provide additional causal-mechanism information.

In Chapter 8, we discussed the theory-based model of concepts. People's theories affect not only the concepts they have but also the causal inferences they make with these concepts. Consider a set of studies investigating how clinical psychologists make inferences about patients who come to them with various kinds of disorders. Typically, they use the Diagnostic and Statistical Manual of Mental Disorders (fourth edition; DSM-IV, American Psychiatric Association, 1994) to make such diagnoses. The DSM-IV is atheoretical. In other words, it is based on no theory. In five experiments, clinicians were asked to diagnose patients with disorders that were either causally central or causally peripheral to the clinicians' own theories of disorders. Participants were more likely to diagnose a hypothetical patient as having a disorder if that disorder was more causally central to the clinicians' own belief system. In other words, the clinicians' own implicit theories trumped the DSM-IV in their diagnosing.

The theory-based model of categorization posits that concepts are represented as theories, not feature lists. Thus, it is interesting that the DSM-IV established atheoretical guidelines for mental-disorder diagnosis. Five experiments investigated how clinicians handled an atheoretical nosology (Kim & Ahn, 2002). The investigators used a variety of methods to ascertain clinicians' implicit personal theories of disorders. In this way, it was possible to discover the clinicians' causal theories of disorders. Then the clinicians' responses on diagnostic and memory tasks were measured. Of particular interest here is how the clinicians decided whether a patient had a particular disorder. Did they primarily use the DSM-IV, the standard reference manual in the field?
Or did they use their own causal theories of disorders? Participants were more likely to diagnose a hypothetical patient with a disorder if that patient had causally central rather than causally peripheral symptoms according to their theory of the disorder. In other words, the more causally central to their own implicit theory a diagnosis was, the more likely they were to give that diagnosis in response to a set of symptoms. Their memory for causally central symptoms was also better than their memory for symptoms they saw as causally peripheral.


Clinicians are thus driven to use their own implicit causal theories, even if they have had decades of practice with the atheoretical DSM. This set of studies provides strong support for the theory-based notion of concepts discussed in Chapter 8. But it also shows that these theories do not just "sit" in the head. They are actively used when we do causal reasoning. Even experts prefer their causal theories over a standard reference work in their field (the DSM-IV). An alternative view is that people act as naive scientists, positing unobservable theoretical causal-power entities to explain observable covariations (Cheng, 1997). Thus, people are somewhat rational in making causal attributions based on the right kinds of covariation information.

Categorical Inferences
On what basis do people draw inferences? People generally use both bottom-up strategies and top-down strategies for doing so (Holyoak & Nisbett, 1988). That is, they use both information from their sensory experiences and information based on what they already know or have inferred previously. Bottom-up strategies are based on observing various instances and considering the degree of variability across instances. From these observations, we abstract a prototype (see Chapters 6 and 9). Once a prototype or a category has been induced, the individual may use focused sampling to add new instances to the category. He or she focuses chiefly on properties that have provided useful distinctions in the past. Top-down strategies include selectively searching for constancies within many variations and selectively combining existing concepts and categories.

Reasoning by Analogy
Inductive reasoning may be applied to a broader range of situations than those requiring causal or categorical inferences. For example, inductive reasoning may be applied to reasoning by analogy. Consider an example analogy problem: "Fire is to asbestos as water is to (a) vinyl, (b) air, (c) cotton, (d) faucet." In reasoning by analogy, the reasoner must observe the first pair of items ("fire" and "asbestos" in this example) and must induce from those two items one or more relations (in this case, surface resistance, because surfaces coated with asbestos can resist fire). The reasoner then must apply the given relation in the second part of the analogy. In the example analogy, the reasoner chooses "vinyl" as the solution because surfaces coated with vinyl can resist water.

Some investigators have used reaction-time methodology to figure out how people solve induction problems. For example, using mathematical modeling, I was able to break down the amounts of time participants spent on various processes of analogical reasoning. I found that most of the time spent in solving simple verbal analogies is spent in encoding the terms and in responding (Sternberg, 1977). Only a small part actually is spent in doing reasoning operations on these encodings.

The difficulty of encoding can become even greater in various puzzling analogies. For example, in the analogy RAT : TAR :: BAT : (a. CONCRETE, b. MAMMAL, c. TAB, d. TAIL), the difficulty is in encoding the analogy as one involving letter reversal rather than semantic content for its solution. In a problematic analogy such as AUDACIOUS : TIMOROUS :: MITIGATE : (a. ADUMBRATE, b. EXACERBATE, c. EXPOSTULATE, d. EVISCERATE), the difficulty is in recognizing the meanings of the words.


If reasoners know the meanings of the words, they probably will find it relatively easy to figure out that the relation is one of antonyms. (Did this example audaciously exacerbate your difficulties in solving problems involving analogies?)

An application of analogies in reasoning can be seen in politics. It has been noted that analogies can help governing bodies come to conclusions (Breuning, 2003). It has also been argued that these analogies can be used effectively to convey to the public the justification for a decision (Breuning, 2003). However, the use of analogies is not always successful. For example, the failure to reach a diplomatic outcome in Kosovo in 1999 may have been due to the selection and use of an inappropriate analogy (Hehir, 2006). These findings highlight both the utility and the possible pitfalls of using analogies in political deliberation. In 2007, opponents of President Bush used an analogy to Vietnam to argue for withdrawing from Iraq. They asserted that the failure of U.S. policies to lead to a conclusive victory in Vietnam was analogous to the situation in Iraq. Bush then turned the tables, using an analogy to Vietnam to argue that withdrawal from Iraq could lead to mass slaughter, as he asserted happened in Vietnam after the Americans left. Thus, analogies can end up being largely in the eye of the beholder rather than in the actual elements being compared.
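The component processes described earlier (encoding the terms, inferring the first-pair relation, applying it to the second pair) can be sketched for the fire : asbestos example. The hand-coded facts below are hypothetical, chosen only for illustration:

    resists = {
        "asbestos": {"fire"},
        "vinyl": {"water"},
        "cotton": set(),
        "faucet": set(),
        "air": set(),
    }

    def solve_analogy(a, b, c, options):
        """Solve 'a : b :: c : ?' where the inferred relation is 'b resists a'."""
        assert a in resists[b], "the first pair must instantiate the relation"
        return [d for d in options if c in resists[d]]

    print(solve_analogy("fire", "asbestos", "water",
                        ["vinyl", "air", "cotton", "faucet"]))  # ['vinyl']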

Development of Inductive Reasoning
Young children do not have the same inductive-reasoning skills as do older children. For example, 4-year-olds appear not to induce generalized biological principles about animals when given specific information about individual animals (Carey, 1987). By age 10, however, children are much more likely to do so. For example, if 4-year-olds are told that both dogs and bees have a particular body organ, they still assume that only animals that are highly similar either to dogs or to bees have this organ and that other animals do not. In contrast, 10-year-olds would induce that if animals as dissimilar as dogs and bees have this organ, many other animals are likely to have this organ as well. Also, 10-year-olds would be much more likely than 4-year-olds to induce biological principles that link humans to other animals. Along the same lines, when 5-year-olds learn new information about a specific kind of animal, they seem to add the information to their existing schemas for the particular kind of animal but not to modify their overall schemas for animals or for biology as a whole (see Keil, 1989, 1999). However, first- and second-graders have shown an ability to choose and even to spontaneously generate appropriate tests for gathering indirect evidence to confirm or disconfirm alternative hypotheses (Sodian, Zaitchik, & Carey, 1991).

Even children as young as 3 years seem to induce some general principles from specific observations, particularly those principles that pertain to taxonomic categories for animals (Gelman, 1984/1985; Gelman & Markman, 1987). For example, preschoolers were able to induce principles that correctly attribute the cause of phenomena (such as growth) to natural processes rather than to human intervention (Gelman & Kremer, 1991; Hickling & Gelman, 1995). In related work, preschoolers were able to reason correctly that a blackbird was more likely to behave like a flamingo than like a bat because blackbirds and flamingos are both birds (Gelman & Markman, 1987). Note that in this example, preschoolers are going against their perception that blackbirds look more like bats than like flamingos, basing their judgment instead on the fact that both are birds (although the effect is admittedly strongest when the term bird also is used in regard to both the flamingo and the blackbird).


Sobel and Kirkham (2006) extended these findings to 24-month-old children. Additionally, these experimenters demonstrated reasoning in children as young as 8 months: infants in this study were able to predict the spatial location of a future event based on the observation of previous events.

Although the purpose of words is largely to express meaning (for example, to indicate a dog by the use of the word dog or a flamingo by the use of the word flamingo), there is some evidence that the process is not wholly one-directional. Sometimes, children use words whose meanings they do not understand and only gradually acquire the correct meaning after they have started to use the words (Kessler Shaw, 1999). Nelson (1999) refers to this phenomenon as "use without meaning."

Other work supports the view that preschoolers may make decisions based on induced general principles rather than on perceptual appearances. For example, they may induce taxonomic categories based on functions (such as means of breathing) rather than on perceptual appearances (such as apparent weight) (Gelman & Markman, 1986). When given information about the internal parts of objects in one category, preschoolers also induced that other objects in the same category were likely to have the same internal parts (Gelman & O'Reilly, 1988; see also Gelman & Wellman, 1991). However, when inducing principles from discrete information, young preschoolers were more likely than older children to emphasize external, superficial features of animals than to give weight to internal structural or functional features. Also, given the same specific information, older children seem to induce richer inferences regarding biological properties than do younger children (Gelman, 1989).

It is important to maintain both forms of knowledge, appearance-based and principled, for flexible use across different situations and domains (Wellman & Gelman, 1998). Knowledge about deep internal functional relationships is important for inducing properties of objects. But similarity in appearance is also important under other circumstances. Knowledge acquisition develops via the use of framework theories, or models, for drawing inferences about the environment in various domains (such as physics, psychology, and biology) (Wellman & Gelman, 1998). Numerous studies demonstrate children's early and rapid acquisition of expertise in understanding physical objects and causal relations among events, psychological entities and causal-explanatory reasoning, and biological entities and forces. The changes in reasoning about factors in these domains appear to show enhanced understanding of the relation between appearances and deeper functional principles. Thus, children use foundational knowledge within different domains to build framework understandings of the world.

An Alternative View of Reasoning
By now you have reasonably inferred that cognitive psychologists often disagree, sometimes rather heatedly, about how and why people reason as they do. An alternative perspective on reasoning has been proposed: that two complementary systems of reasoning can be distinguished. The first is an associative system, which involves mental operations based on observed similarities and temporal contiguities (i.e., tendencies for things to occur close together in time). The second is a rule-based system, which involves manipulations based on the relations among symbols (Sloman, 1996).

The associative system can lead to speedy responses that are highly sensitive to patterns and to general tendencies. Through this system we detect similarities between observed patterns and patterns stored in memory. We may pay more attention to salient features (e.g., highly typical or highly atypical ones) than to defining features of a pattern. This system imposes rather loose constraints that may inhibit the selection of patterns that are poor matches to the observed pattern; it favors remembered patterns that are better matches to the observed pattern. One example of associative reasoning is use of the representativeness heuristic. Another is the belief-bias effect in syllogistic reasoning, which occurs when we agree more with syllogisms that affirm our beliefs, whether or not those syllogisms are logically valid. The associative system may also be at work in the false-consensus effect, in which people believe that their own behavior and judgments are more common and more appropriate than those of other people (Ross, Greene, & House, 1977). Suppose people have an opinion on an issue. Because it is their opinion, they are likely to believe that others share it and regard it as correct. Of course, there is some diagnostic value in one's own opinions; it is quite possible that others do indeed believe what one believes (Dawes & Mulford, 1996; Krueger, 1998). On the whole, however, assuming that others share our views simply because they are our own is a questionable practice.

The rule-based system usually requires more deliberate, sometimes painstaking procedures for reaching conclusions. Through this system, we carefully analyze relevant features (e.g., defining features) of the available data, based on rules stored in memory. This system imposes rigid constraints that rule out possibilities that violate the rules. Evidence for rule-based reasoning takes several forms. First, we can recognize logical arguments when they are explained to us. Second, we can recognize the need to make categorizations based on defining features despite similarities in typical features. For example, we can recognize that a coin with a 3-inch diameter, which otherwise looks exactly like a quarter, must be a counterfeit. Third, we can rule out impossibilities, such as cats conceiving and giving birth to puppies. Fourth, we can recognize many improbabilities; for example, it is unlikely that the U.S. Congress will pass a law that provides annual salaries to all full-time college students.

According to Sloman, we need both complementary systems. We need to respond quickly and easily to everyday situations, based on observed similarities and temporal contiguities. Yet we also need a means for evaluating our responses more deliberately.

The two systems may be conceptualized within a connectionist framework (Sloman, 1996). The associative system is represented easily in terms of pattern activation and inhibition, which readily fits the connectionist model. The rule-based system may be represented as a system of production rules (see Chapter 11).
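To make the contrast concrete, consider the following minimal sketch. It is a hypothetical toy example, not drawn from Sloman's work: the feature sets, threshold, and function names are all invented for illustration. The associative route classifies a new item by its overall resemblance to stored exemplars, whereas the rule-based route applies a single defining-feature rule:

    # Toy illustration of the two-systems distinction (Sloman, 1996).
    # All data, names, and the 0.75 threshold are invented for illustration.

    def similarity(a, b):
        """Proportion of features two items share (associative matching)."""
        shared = sum(1 for f in a if f in b)
        return shared / max(len(a), len(b))

    # Associative system: judge by resemblance to stored exemplars.
    stored_quarters = [{"round", "silver", "eagle_design", "1-inch"}]

    def associative_judgment(item):
        best = max(similarity(item, ex) for ex in stored_quarters)
        return "quarter" if best >= 0.75 else "not a quarter"

    # Rule-based system: judge by a defining feature, ignoring resemblance.
    def rule_based_judgment(item):
        # A genuine quarter must be about 1 inch in diameter.
        return "quarter" if "1-inch" in item else "counterfeit"

    coin = {"round", "silver", "eagle_design", "3-inch"}
    print(associative_judgment(coin))  # "quarter" -- it looks like one
    print(rule_based_judgment(coin))   # "counterfeit" -- violates the rule

The associative route accepts the 3-inch coin because it resembles stored quarters; the rule-based route rejects it because it violates a defining feature, mirroring the counterfeit-coin example above.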
An alternative connectionist view suggests that deductive reasoning may occur when a given pattern of activation in one set of nodes (e.g., those associated with a particular premise or set of premises) entails or produces a particular pattern of activation in a second set of nodes (Rips, 1994). Similarly, a connectionist model of inductive reasoning may involve the repeated activation of a series of similar patterns across various instances. This repeated activation may then strengthen the links among the activated nodes, thereby leading to generalization or abstraction of the pattern across a variety of instances.
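The inductive mechanism just described can be sketched in a few lines. This is a generic Hebbian-style toy model, not Rips's actual implementation; the node names and learning rate are invented for illustration:

    # Toy Hebbian-style sketch of the connectionist account of induction:
    # links between co-activated nodes strengthen with repeated exposure,
    # so a pattern shared across many instances becomes a generalization.
    from itertools import combinations

    weights = {}          # link strength between pairs of feature nodes
    LEARNING_RATE = 0.1   # arbitrary illustrative value

    def observe(instance):
        """Strengthen links among all nodes active in this instance."""
        for a, b in combinations(sorted(instance), 2):
            weights[(a, b)] = weights.get((a, b), 0.0) + LEARNING_RATE

    # Repeated activation of similar patterns across instances:
    for animal in (
        {"dog", "breathes", "grows"},
        {"bee", "breathes", "grows"},
        {"bird", "breathes", "grows"},
    ):
        observe(animal)

    # The link shared by every instance is now the strongest:
    # the abstracted pattern "living things breathe and grow".
    print(max(weights, key=weights.get))  # ('breathes', 'grows')

Only the link present in every instance accumulates weight across all observations, which is the sense in which repeated activation yields an abstraction of the common pattern.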


Connectionist models of reasoning and the various other approaches described in this chapter offer diverse views of the available data regarding how we reason and make judgments. At present, no one theoretical model explains all the data well, but each model explains at least some of the data satisfactorily. Together, the theories help us understand human intelligence, the topic of the next and final chapter.

Consider this passage from Shakespeare's Macbeth:

First Apparition: Macbeth! Macbeth! Beware Macduff; beware the thane of Fife. Dismiss me: enough. . . .

Second Apparition: Be bloody, bold, and resolute; laugh to scorn the power of man, for none of woman born shall harm Macbeth.

Macbeth: Then live, Macduff: what need I fear of thee? But yet I'll make assurance double sure, and take a bond of fate: thou shalt not live; that I may tell pale-hearted fear it lies, and sleep in spite of thunder.

In this passage, Macbeth mistakenly took the Second Apparition's vision to mean that no man could kill him, so he boldly decided to confront Macduff. However, as we all know, Macduff was born by abdominal (Cesarean) delivery, so he did not fall into the category of men who could not harm Macbeth. Macduff eventually killed Macbeth because Macbeth drew the wrong conclusion from the Second Apparition's premonition. The First Apparition's warning about Macduff should have been heeded.

Suppose you are trying to decide between buying an SUV and a subcompact car. You would like the room of the SUV, but you would like the fuel efficiency of the subcompact. Whichever one you choose, did you make the right choice? This is a difficult question to answer because most of our decisions are made under conditions of uncertainty. Say that you bought the SUV. You can carry a number of people, you have the power to pull a trailer easily up a hill, and you sit higher, so your road vision is much better. However, every time you fill up the gas tank, you are reminded of how much fuel this vehicle takes. On the other hand, say that you bought the subcompact. When picking up friends at the airport, you have difficulty fitting all of them and their luggage; you cannot pull trailers up hills (or at least, not very easily); and you sit so low that when there is an SUV in front of you, you can hardly see what is on the road. However, every time you fill up your gas tank or hear someone with an SUV complaining about how much it costs to fill up his or her tank, you see how little you have to pay for gas. Again, did you make the right choice? There are no right or wrong answers to most of the decisions we make. We use our best judgment at the time of our decisions and think of them as more right than wrong, as opposed to definitively right or wrong.


Neuroscience of Reasoning
As in both problem solving and decision making, the process of reasoning involves the prefrontal cortex (Bunge & associates, 2004). Further, reasoning involves brain areas associated with working memory, such as the basal ganglia (Melrose, Poulin, & Stern, 2007). The basal ganglia are involved in a variety of functions, including cognition and learning, and are connected to the prefrontal cortex through a variety of pathways (Melrose, Poulin, & Stern, 2007). The contribution of working-memory systems is to be expected, as reasoning involves the integration of information. Exploration of conditional reasoning through event-related potential (ERP) methods revealed an increased negativity in the anterior cingulate cortex approximately 600 milliseconds and again 2000 milliseconds after task presentation (Qui & associates, 2007). This negativity suggests increased cognitive control, as would be expected in a reasoning task.

In one study exploring moral reasoning in persons who show antisocial behaviors indicative of poor moral reasoning, malfunctions were noted in several areas within the prefrontal cortex, including the dorsal and ventral regions (Raine & Yang, 2006). Impairments in the amygdala, hippocampus, angular gyrus, anterior cingulate, and temporal cortex were also observed. Recall that the anterior cingulate is involved in decision making and the hippocampus is involved in working memory. It is therefore to be expected that malfunctions in these areas would result in deficiencies in reasoning.

Key Themes
Several of the themes discussed in Chapter 1 are relevant to this chapter.

A first theme is rationalism versus empiricism. Consider, for example, errors in syllogistic reasoning. One way of understanding such errors is in terms of the particular logical error made, independently of the mental processes the reasoner has used. For example, affirming the consequent is a logical error; one need not do any empirical research to understand, at the level of symbolic logic, the error that has been made. Moreover, deductive reasoning is itself based on rationalism. A syllogism such as "All toys are chairs. All chairs are hot dogs. Therefore, all toys are hot dogs" is logically valid but factually incorrect. Thus, deductive logic can be understood at a rational level, independently of its empirical content (a formal sketch of this point appears at the end of this section). But if we wish to know psychologically why people make errors, or what is factually true, then we need to combine empirical observations with rational logic.

A second theme is domain generality versus domain specificity. The rules of deductive logic apply equally in all domains. One can apply them, for example, to abstract or to concrete content. But research has shown that, psychologically, deductive reasoning with concrete content is easier than reasoning with abstract content. So although the rules apply in exactly the same way across domains, ease of application is not psychologically equivalent across those domains.

A third theme is nature versus nurture. Are people preprogrammed to be logical thinkers? Piaget, the famous Swiss cognitive-developmental psychologist, believed so. He believed that the development of logical thinking follows an inborn sequence of stages that unfold over time. According to Piaget, there is not much one can do to alter either the sequence or the timing of these stages. But research has suggested that the sequence Piaget proposed does not unfold as he thought. For example, many people never reach his highest stage, and some children are able to reason in ways he would not have predicted they could until they were older. So once again, nature and nurture interact.
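To make the rationalism point concrete, the toys/chairs/hot dogs syllogism instantiates a standard valid form (traditionally called Barbara). The logical notation below is a sketch added for illustration; only the example itself comes from the chapter:

    % Valid categorical form (Barbara): content-independent.
    % All A are B; all B are C; therefore, all A are C.
    \[
      \forall x\,(A(x) \to B(x)),\quad
      \forall x\,(B(x) \to C(x))
      \;\vdash\;
      \forall x\,(A(x) \to C(x))
    \]
    % With A = toys, B = chairs, C = hot dogs, the argument is valid
    % even though every statement in it is factually false:
    % validity concerns form; truth concerns content.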


Summary
1. What are some of the strategies that guide human decision making? Early theories were designed to achieve practical mathematical models of decision making and assumed that decision makers are fully informed, infinitely sensitive to information, and completely rational. Subsequent theories began to acknowledge that humans often use subjective criteria for decision making, that chance elements often influence the outcomes of decisions, that humans often use subjective estimates in considering the outcomes, and that humans are not boundlessly rational in making decisions. People apparently often use satisficing strategies, settling for the first minimally acceptable option, and strategies involving a process of elimination by aspects to weed out an overabundance of options.

One of the most common heuristics is the representativeness heuristic: we fall prey to the fallacious belief that small samples of a population resemble the whole population in all respects. Our misunderstanding of base rates and other aspects of probability often leads us to other mental shortcuts as well, such as the conjunction fallacy and the inclusion fallacy. Another common heuristic is the availability heuristic, in which we make judgments based on information that is readily available in memory, without bothering to seek less available information. The use of heuristics such as anchoring and adjustment, illusory correlation, and framing effects also often impairs our ability to make effective decisions.

Once we have made a decision (or, better yet, another person has made a decision) and the outcome of the decision is known, we may engage in hindsight bias, skewing our perception of the earlier evidence in light of the eventual outcome. Perhaps the most serious of our mental biases, however, is overconfidence, which seems to be amazingly resistant to evidence of our own errors.
2. What are some of the forms of deductive reasoning that people may use, and what factors facilitate or impede deductive reasoning? Deductive reasoning involves reaching conclusions from a set of conditional propositions or from a syllogistic pair of premises. Among the various types of syllogisms are linear syllogisms and categorical syllogisms. In addition, deductive reasoning may involve complex transitive-inference problems or mathematical or logical proofs involving large numbers of terms. Deductive reasoning may also involve the use of pragmatic reasoning schemas in practical, everyday situations.

In drawing conclusions from conditional propositions, people readily apply the modus ponens argument, particularly regarding universal affirmative propositions. Most of us have more difficulty, however, in using the modus tollens argument and in avoiding deductive fallacies such as affirming the consequent or denying the antecedent (the four forms are summarized formally below), particularly when faced with particular propositions or negative propositions. In solving syllogisms, we have similar difficulties with particular premises and negative premises and with terms that are not presented in the customary sequence. Frequently, when trying to draw conclusions, we overextend a strategy from a situation in which it leads to a deductively valid conclusion to one in which it leads to a deductive fallacy. We also may foreclose on a given conclusion before considering the full range of possibilities that may affect it. These mental shortcuts may be exacerbated by situations in which we engage in confirmation bias (tending to confirm our own beliefs).

We can enhance our ability to draw well-reasoned conclusions in many ways, such as by taking time to evaluate the premises or propositions carefully and by forming multiple mental models of the propositions and their relationships. We also may benefit from training and practice in effective deductive reasoning. We are particularly likely to reach well-reasoned conclusions when such conclusions seem plausible and useful in pragmatic contexts, such as during social exchanges.
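For reference, the two valid argument forms and the two fallacies named above can be stated in standard propositional logic. This notation is a sketch added for illustration, not from the chapter text (the \nvdash symbol requires the amssymb package):

    \begin{align*}
      &\text{Modus ponens (valid):}                & p \to q,\ p      &\ \vdash\ q\\
      &\text{Modus tollens (valid):}               & p \to q,\ \neg q &\ \vdash\ \neg p\\
      &\text{Affirming the consequent (fallacy):}  & p \to q,\ q      &\ \nvdash\ p\\
      &\text{Denying the antecedent (fallacy):}    & p \to q,\ \neg p &\ \nvdash\ \neg q
    \end{align*}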



3. How do people use inductive reasoning to reach causal inferences and other types of conclusions? Although we cannot reach logically certain conclusions through inductive reasoning, we can at least reach highly probable conclusions through careful reasoning. More than a century ago, John Stuart Mill recommended that people use various canonical strategies for reaching inductive conclusions. When making categorical inferences, people tend to use both top-down and bottom-up strategies. Processes of inductive reasoning generally form the basis of scientific study and hypothesis testing as a means to derive causal inferences. In addition, in reasoning by analogy, people often spend more time encoding the terms of the problem than performing the inductive reasoning itself. It appears that people sometimes use reasoning based on formal rule systems, such as by applying rules of formal logic, and sometimes use reasoning based on associations, such as by noticing similarities and temporal contiguities.
4. Are there any alternative views of reasoning? Steven Sloman has suggested that people have two distinct systems of reasoning: an associative system that is sensitive to observed similarities and temporal contiguities, and a rule-based system that involves manipulations based on relations among symbols.

Thinking about Thinking: Factual, Analytical, Creative, and Practical Questions
1. Describe some of the heuristics and biases people use while making judgments or reaching decisions.
2. What are the two logical arguments and the two logical fallacies associated with conditional reasoning, as in the Wason selection task?
3. Which of the various approaches to conditional reasoning seems best to explain the available data? Give reasons for your answer.
4. Some cognitive psychologists question the merits of studying logical formalisms such as linear or categorical syllogisms. What do you think can be gained by studying how people reason in regard to syllogisms?
5. Based on the information in this chapter, design a way to help college students more effectively apply deductive reasoning to the problems they face.
6. Design a question, such as the ones used by Kahneman and Tversky, which requires people to estimate subjective probabilities of two different events. Indicate the fallacies that you may expect to influence people's estimates or tell why you think people would give realistic estimates of probability.
7. Suppose that you need to rent an apartment. How would you go about finding one that most effectively meets your requirements and your preferences? How closely does your method resemble the methods described by subjective expected utility theory, by satisficing, or by elimination by aspects?
8. Give two examples showing how you use rule-based reasoning and associative reasoning in your everyday experiences.



Key Terms
availability heuristic, base rate, bounded rationality, categorical syllogism, causal inferences, conditional reasoning, deductive reasoning, deductive validity, elimination by aspects, fallacy, hindsight bias, illusory correlation, inductive reasoning, judgment and decision making, mental model, overconfidence, pragmatic reasoning schema, premises, proposition, reasoning, representativeness, satisficing, subjective probability, subjective utility, syllogism

Explore CogLab by going to http://coglab.wadsworth.com. To learn more, examine the following experiments: Risky Decisions, Typical Reasoning, and the Wason Selection Task.

Annotated Suggested Readings
Leighton, J. P., & Sternberg, R. J. (Eds.). (2004). The nature of reasoning. New York: Cambridge University Press. A complete review of contemporary theories and research on reasoning.

Sobel, D. M., & Kirkham, N. Z. (2006). Blickets and babies: The development of causal reasoning in toddlers and infants. Developmental Psychology, 42(6), 1103-1115. An approachable study examining reasoning in infants.




Exploring Cognitive Psychology


1. What are the key issues in measuring intelligence? How do different researchers and theorists approach the issues?
2. What are some information-processing approaches to intelligence?
3. What are some alternative views of intelligence?
4. How have researchers attempted to simulate intelligence using machines such as computers?
5. Can intelligence be improved, and if so, how?
6. How does intelligence develop in adults?

Before you read about how cognitive psychologists view intelligence, try responding to a few tasks that require you to use your own intelligence:

1. Candle is to tallow as tire is to (a) automobile, (b) round, (c) rubber, (d) hollow.
2. Complete this series: 100%, 0.75, ½; (a) whole, (b) one eighth, (c) one fourth.
3. The first three items form one series. Complete the analogous second series that starts with the fourth item:

[Figure: a geometric analogy series with response options (a) through (d); not legible in this scan]

4. You are at a party of truth-tellers and liars. The truth-tellers always tell the truth, and the liars always lie. You meet someone new. He tells you that he just heard a conversation in which a girl said she was a liar. Is the person you met a liar or a truth-teller?

Each of the preceding tasks is believed, at least by some cognitive psychologists, to require some degree of intelligence. (The answers are at the end of this section.) Intelligence is a concept that can be viewed as tying together all of cognitive psychology. Just what is intelligence? In a recent article, researchers identified approximately 70 different definitions of intelligence (Legg & Hutter, 2007). In 1921, when the editors of the Journal of Educational Psychology asked 14 famous psychologists that question, the responses varied but generally embraced two themes. First, intelligence involves the capacity to learn from experience. Second, it involves the ability to adapt to the surrounding environment. Sixty-five years later, twenty-four cognitive psychologists with expertise in intelligence research were asked the same question