Subjective Probability Judgments

M. Bar-Hillel, in International Encyclopedia of the Social & Behavioral Sciences, 2001

3 Availability

If the representativeness heuristic tends to overlook category size, the availability heuristic is used primarily for judging category size—or rather, relative size. Category size relates to probability inasmuch as probability is sometimes taken to be the proportion, in the population as a whole, of instances that belong to some specific category, or possess some specific feature. A statistician who wants to estimate relative frequencies draws a random sample. The cognitive counterpart of the statistician's procedure is to ‘sample in the mind’—namely, to bring to mind, either by active search or spontaneously, examples of the target population. Sampling in the mind, however, is hardly random. It is subject to many effects that determine the ease with which examples come to mind (Tversky and Kahneman 1973). Examples of such effects are salience, recency, imaginability, and—fortunately—even actual frequency.
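The mechanism can be made concrete with a small Python sketch. Everything here (the population, the salience weights, the sample size) is invented for illustration; the point is only that weighting recall by salience biases the estimated proportion:

```python
import random

random.seed(1)

# Hypothetical population: 1,000 instances, 30 percent of which belong
# to the target category.  Target instances get twice the salience
# weight, standing in for recency, vividness, or media coverage.
population = [("target", 2.0)] * 300 + [("other", 1.0)] * 700

def mental_sample(pop, k=20):
    # Draw k instances with probability proportional to salience,
    # mimicking non-random 'sampling in the mind'.
    labels = [label for label, _ in pop]
    weights = [w for _, w in pop]
    return random.choices(labels, weights=weights, k=k)

# A truly random sample would estimate about 0.30 on average; the
# salience-weighted mental sample systematically overestimates.
estimates = [mental_sample(population).count("target") / 20
             for _ in range(2000)]
print(f"true proportion: 0.30, "
      f"mean mental estimate: {sum(estimates) / len(estimates):.2f}")
```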

Effects that determine the ease of calling to mind are known as availability effects. The availability heuristic is the judgmental procedure of reliance on mental sampling, and is demonstrated in the following example.

Example 6: Subjects were read a list of 39 names of celebrities. In List 1, the 19 women were rather more famous than the 20 men, and in List 2 it was the 19 men who were more famous than the 20 women. Each person heard a single list. They were then asked whether they had heard more female names or more male names. As predicted, when the men [women] were more famous, male [female] names were more frequently recalled. They were also judged, by independent respondents, to be more frequent.

It turns out to be unnecessary actually to bring examples to mind, as it is enough just to estimate how many could be brought to mind. People seem to have a pretty good notion of availability (Tversky and Kahneman 1973), and use it to estimate probabilities.

In Example 7, the availability heuristic leads to a violation of the extension principle discussed earlier.

Example 7: Imagine the following scenarios, and estimate their probability. (a) A massive flood somewhere in America, in which more than 1,000 people die. (b) An earthquake in California, causing massive flooding in which more than 1,000 people die.

Respondents estimated the first event to be less likely than the second (Tversky and Kahneman 1983). As discussed earlier, the extension principle logically precludes this possibility: a flood with unspecified causes in a relatively unspecified location is necessarily more likely than one with a more specific cause and location. An earthquake in California, however, is a readily imaginable event which greatly increases the availability—hence the subjective probability—of the flood scenario.

A particularly prominent result from the availability heuristic is the unpacking effect, shown in Example 8.

Example 8: A well-known 22-year-old Hollywood actress was admitted to a hospital emergency room with pains in the lower right abdomen, which had lasted over 12 hours. What could she be suffering from?

A group of general practitioners from the Stanford area were given this question. Half were asked to assign a probability to the following possibilities: (a) Indigestion, (b) Extra-uterine pregnancy, (c) Other. The list given to the other half was: (a) Indigestion, (b) Extra-uterine pregnancy, (c) Acute appendicitis, (d) Inflamed kidney, (e) Cervical inflammation, (f) Other. The first group assigned a probability of 50 percent to the ‘Other’ category. The second group assigned a probability of 69 percent to categories (c) to (f), although of course these are but a more detailed breakdown (‘unpacking’) of ‘Other’ (Redelmeier et al. 1995). A similar effect is found even when it is hard to argue that the unpacking itself provides additional information. When the category of ‘homicide’ is broken down into ‘homicide by a stranger’ and ‘homicide by an acquaintance,’ the latter two are given a higher total probability as causes of death than the unbroken category (Tversky and Koehler 1994). Here, apparently, the different descriptions of the categories merely call attention to different aspects of the categories. Either by affecting memory or by affecting salience, enhancing the availability of subcategories through more detailed descriptions enhances their perceived probability.
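The arithmetic of the violation is worth a quick check. The figures below are the ones quoted above from Redelmeier et al. (1995); the support-theory reading in the final comment is an interpretation of Tversky and Koehler (1994), not a quotation:

```python
# Figures quoted above from Redelmeier et al. (1995).
packed_other = 0.50    # group 1: one residual 'Other' category
unpacked_sum = 0.69    # group 2: categories (c) through (f) combined

# Coherence requires these two numbers to be equal, since (c)-(f) plus
# 'Other' in the long list is just a finer description of 'Other' in
# the short list.
print(f"subadditivity gap: {unpacked_sum - packed_other:+.2f}")  # +0.19

# Support-theory reading: unpacking raises the total 'support' s(.)
# that a hypothesis recruits, so judged probability
# P(A, B) = s(A) / (s(A) + s(B)) rises with more detailed descriptions.
```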

Read full chapter

URL: https://www.sciencedirect.com/science/article/pii/B0080430767006379

Heuristics in Social Cognition

F. Strack, in International Encyclopedia of the Social & Behavioral Sciences, 2001

2 The Availability Heuristic

In assessing the frequency or probability of an event (or the co-occurrence of several events), individuals often employ a strategy that is based on the ease with which bits of information can be retrieved or generated from memory. An employer wishing to gauge the rate of unemployment in their community may go to the trouble of obtaining the relevant information from official sources. But if they are not motivated or able to do that, they can try to think of unemployed friends or acquaintances. The more easily they are able to do so, the higher will be their estimate of the rate of unemployment. Tversky and Kahneman (1973) called this judgment strategy the ‘availability heuristic.’

Additional judgmental phenomena in the social domain are connected to the availability heuristic. One example is risk assessment, which depends on the frequency with which a type of event occurs. Thus, riding a motorcycle is risky to the extent that accidents occur frequently. However, the actual frequency of an event and the ease with which it comes to mind may be dissociated if certain events are more likely to be reported in the media. For example, Lichtenstein et al. (1978) found that causes of death frequently reported in the press were greatly overestimated in terms of their frequency. While heart disease causes 85 percent more deaths than accidents, only 20 percent of those surveyed thought that heart disease was the greater risk. The conspicuousness of events also influenced their availability: in the study by Lichtenstein et al. (1978), the overestimated causes of death were especially dramatic and sensational (murder, flood, automobile accident), while the rather inconspicuous causes of death (heart disease, cancer, diabetes) were underestimated. Events that are known from personal experience are also more readily available (and therefore judged more likely to occur) than events that are only known through third parties (Greening et al. 1996).

It should be noted that ‘availability’ has two psychological components that are usually confounded: the content that comes to mind and the ease (or effort) experienced while retrieving the information from memory (Schwarz et al. 1991). Several studies from the domain of health psychology demonstrate the direct influence of experienced ease of retrieval on the assessment of risk. For example, Rothman and Schwarz (1998) found higher ratings of risk for contracting heart disease when participants were asked to list three (easy to generate) instead of eight (difficult to generate) factors that increased their own risk.

Attitudinal judgments (see Attitude Formation: Function and Structure) about everyday topics are also made on the basis of the availability heuristic under certain conditions. Wänke et al. (1996) reported more positive attitudes towards public transportation on the part of subjects if they had been asked previously to generate three instead of seven arguments in favor of public transportation. These experimental examples were constructed such that the retrieval of more information was associated with less experienced ease of recall. The results suggest that under suboptimal conditions, judgment is based on the experience of information retrieval and not on the content of what is retrieved.

Another example refers to the ease with which we are able to imagine a different course of events. Let us look at the following example of counterfactual thinking: ‘If I had gotten up five minutes earlier this morning I would not have missed the train, I would not have been late for the exam and would have been able to read the one additional problem I needed to pass the exam.’ What is crucial is that the ease with which an event can be undone mentally influences affective reactions to this event. While the person in this example would presumably feel a great deal of anger over being five minutes late, a person who overslept by an hour instead of five minutes would be less angry. For a detailed account of recent findings on counterfactual thinking see Roese and Olson (1995).

Apart from these examples of judgments and affective reactions based on the perceived ease of cognitive operations, the ‘availability principle’ in its general form, that is, the finding that increased accessibility of contents (see Priming, Cognitive Psychology of) and cognitive structures influences judgments, has stimulated a host of research in social psychology. This pertains, for example, to work on the categorization of persons, causal attribution (see Attributional Processes: Psychological), the constancy of opinions after they have been discredited, and the testing of hypotheses.

Read full chapter

URL: https://www.sciencedirect.com/science/article/pii/B0080430767017484

Heuristics for Decision and Choice

P.M. Todd, in International Encyclopedia of the Social & Behavioral Sciences, 2001

2 Heuristics for Probability Judgment

Since the rise of the heuristics-and-biases research program in the 1970s, the heuristics most widely studied within the social sciences are those that people use to make judgments or estimates of probabilities or other quantities in situations of uncertainty where they do not have complete, exact information (Tversky and Kahneman 1974). The representativeness heuristic is a means to assess the probability that an object A belongs to a particular category B (e.g., that a person described as meek is a pilot) or that an event A is generated by a particular process B (e.g., that the sequence HHTTTT was generated by throwing a fair coin). This heuristic produces probability judgments according to the extent that A is representative of, or similar to, B (e.g., meekness is not representative of pilots, so a meek person is judged as having a low probability of being a pilot). The availability heuristic can be used to produce assessments of class frequency or event probability based on how easily instances of the class or event can be mentally retrieved (e.g., plane crashes may seem like a frequent cause of death because it is easy to recall examples) or constructed (via the simulation heuristic). The anchoring-and-adjustment heuristic produces estimates of quantities by starting with a particular value (the anchor) and adjusting away from it, usually insufficiently. For example, people asked to quickly estimate 8×7×6×5×4×3×2×1 give a higher value than people asked to estimate 1×2×3×4×5×6×7×8: multiplying the first few values creates a higher anchor in the descending sequence than in the ascending one, and the insufficient upward adjustment that follows preserves this difference in the final estimates.
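As a concrete illustration of how a partial computation can seed an anchor, here is a small Python sketch; the three-step cutoff is an assumption chosen for illustration. (Tversky and Kahneman reported median estimates of 2,250 for the descending sequence and 512 for the ascending one, against a true product of 40,320.)

```python
from math import prod

descending = [8, 7, 6, 5, 4, 3, 2, 1]
ascending = descending[::-1]

def quick_anchor(seq, steps=3):
    # Multiply only the first few values, then adjust upward from there.
    return prod(seq[:steps])

print(prod(descending))          # 40320, the true product
print(quick_anchor(descending))  # 8*7*6 = 336: a high anchor
print(quick_anchor(ascending))   # 1*2*3 = 6: a low anchor
```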

Most researchers agree that such heuristics are widely used because they usually lead to good choices without much time or mental effort. Despite this, most of the large body of evidence consistent with the use of these heuristics comes from showing where they break down and lead to errors (e.g., availability judgments can be manipulated by vivid examples like plane crashes, and anchoring can lead to different choices solely due to the order of information presentation—see Decision Biases, Cognitive Psychology of), leading many to conclude that people are poor decision makers after all. These heuristics have also been criticized as vague redescriptions of observed choice behavior that are therefore difficult to falsify; however, more specific versions (e.g., testing whether availability works through ease of recall or through the number of items recalled) are also being explored.

Read full chapter

URL: https://www.sciencedirect.com/science/article/pii/B008043076700629X

Decision Making, Psychology of

J. van der Pligt, in International Encyclopedia of the Social & Behavioral Sciences, 2001

3 Heuristics

The study of heuristics tends to focus on systematic errors in human decision making, and these heuristics often help explain anomalies in how people infer expectations from evidence (see Heuristics in Social Cognition).

Three heuristics that deal with probabilistic thinking have received considerable attention: (a) availability, (b) representativeness, and (c) anchoring and adjustment. The availability heuristic refers to the tendency to assess the probability of an event based on the ease with which instances of that event come to mind. This heuristic has been investigated in a variety of domains and relates probability estimates to memory access. People generally overestimate the probability of an event if concrete instances of that event are easily accessible in memory. In general, ease of recall and frequency of occurrence are correlated. A number of factors that affect memory are, however, unrelated to probability. For example, vivid images are easier to recall than pallid ones. Thus, having been involved in a serious car accident is likely to be better remembered than annual statistics about the frequency of (types of) traffic accidents, and the former is likely to have more impact on probability estimates than the latter. Dawes (1994) argued that the salience of negative and relatively extreme exemplars of drug addicts can bias policy-makers' perceptions of the entire group and result in negative attitudes toward programs such as the provision of clean needles to prevent a further spread of the AIDS virus.

The representativeness heuristic refers to the tendency to assess the probability that a stimulus belongs to a particular class by judging the degree to which it corresponds to an appropriate mental model of that class. Kahneman and Tversky (1973) reported a well-known example of how ignoring prior probabilities can affect judgment. In their study, respondents were provided with brief personality sketches, supposedly of engineers and lawyers. They were asked to assess the probability that each sketch described a member of one profession or the other. Half the respondents were told that the population from which the sketches were drawn consisted of 30 engineers and 70 lawyers; the remaining respondents were told that there were 70 engineers and 30 lawyers. Findings showed that the prior probabilities were essentially ignored, and that respondents estimated the probability of class membership by judging how similar each personality sketch was to their mental model of an engineer or a lawyer.
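For contrast, the normative use of the base rate follows directly from Bayes' rule. In the sketch below, the likelihood ratio of 4 (how much more 'engineer-like' than 'lawyer-like' a sketch reads) is a hypothetical figure chosen for illustration:

```python
def posterior_engineer(prior_engineer, likelihood_ratio):
    # Bayes' rule in odds form, where likelihood_ratio is
    # P(sketch | engineer) / P(sketch | lawyer).
    prior_odds = prior_engineer / (1 - prior_engineer)
    posterior_odds = prior_odds * likelihood_ratio
    return posterior_odds / (1 + posterior_odds)

# Hypothetical sketch that reads four times as 'engineer-like'
# as 'lawyer-like':
for prior in (0.30, 0.70):
    print(f"prior {prior:.2f} -> "
          f"posterior {posterior_engineer(prior, 4.0):.2f}")
# prior 0.30 -> posterior 0.63
# prior 0.70 -> posterior 0.90
```

Under this hypothetical ratio, coherent answers would differ by nearly 30 percentage points between the two base-rate groups; the respondents' answers barely differed at all.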

Anchoring and adjustment refers to a general judgment process in which an initially given or generated response serves as an anchor, and other information is insufficiently used to adjust that response. The heuristic assumes that people often start their judgmental process by focusing on some initial value. The biases related to this heuristic stem from two distinct sources: first, one could use an irrelevant anchor; second, one could adjust insufficiently up or down from the original starting value.

Read full chapter

URL: https://www.sciencedirect.com/science/article/pii/B0080430767017502

Psychology

Sean B. Eom, in Encyclopedia of Information Systems, 2003

III Review of Important Psychology Theories/concepts

Of these numerous fields of psychology as discussed earlier, cognitive psychology and social psychology have significantly influenced the formation of information systems subspecialties. The central component of cognitive psychology, which is the study of the human mind, is concerned with adults' normal, typical cognitive activities of knowing/perceiving/learning. Cognitive scientists view the human mind as an information processing system that receives, stores, retrieves, transforms, and transmits information (the computational or information processing view). The central hypothesis of cognitive psychology is that “thinking can best be understood in terms of representational structures in mind and computational procedures that operate on those structures.” Cognitive science originated in the mid-1950s when researchers in several fields (philosophy, psychology, AI, neuroscience, linguistics, and anthropology) began to develop theories of the workings of the mind—particularly knowledge and its acquisition, storage, and use in intelligent activity.

A major concern of cognitive psychology is the cognitive architecture (Fig. 2), referring to the information processing structure and its capacity. The cognitive architecture is comprised of three subsystems: sensory input systems (vision, hearing, taste/smell, touch), central processing systems, and output systems (motor outputs and decisions and actions). The central processing systems perform a variety of activities to process information gathered from the sensory input systems. These include memory and learning, such as processing/storing/retrieving visual (mental imagery) and auditory/echoic information, and representing that information in memory. Other areas of central processing activity include problem solving (any goal-directed sequence of cognitive operations), reasoning, and creativity. Specifically, they include cognitive skills in problem solving; reasoning, including reasoning about probability; judgment and choice; recognizing patterns, speech sounds, words, and shapes; representing declarative and procedural knowledge; and the structure and meaning of languages, including morphology and phonology.

Figure 2. A global view of the cognitive architecture [adapted from Stillings, et al. (1995). Cognitive Science: An Introduction, (Second Edition). Cambridge, MA: MIT Press].

The central processing system can be viewed from the information processing perspective as a three-stage process:

1. Sensory memory can be characterized by large capacity, very short duration of storage, and direct association with sensory processes. The initial attention process selects information for further processing.

2. Short-term memory (also known as working memory) has limited capacity; here, information selected from sensory memory is linked with information retrieved from long-term memory (past experiences and world knowledge), and this forms the basis for our actions and behavior.

3. Long-term memory is characterized by large capacity and long duration. Our episodic (everyday experiences) and semantic (world knowledge) information is stored in long-term memory. Much of the research dealing with connectionist models, such as neural nets, focuses on the structure of this long-term memory, although some researchers also expand the neural net perspective to include sensory and short-term memory. Topics such as language, concept formation, problem solving, reasoning, creativity, and decision making are usually associated with the structure and processes of this long-term memory. Also, the issue of “experts” versus “novices” is almost always linked with the structure of this secondary (long-term) memory.

Behavioral decision theory is perhaps the most influential theory developed by cognitive scientists to contribute to the development of DSS research subspecialties. Behavioral decision theory is concerned with normative and descriptive decision theories. Normative decision theory aims at “prescribing courses of action that conform most closely to the decision maker's beliefs and values.” Behavioral decision theorists proposed decision theories solely on the basis of behavioral evidence, without presenting neurological internal constructs on which to base these mechanisms. Descriptive decision theory aims at “describing these beliefs and values and the manner in which individuals incorporate them into their decisions.” Descriptive decision theories have focused on three main areas of study: judgment, inference, and choice. The study of judgment, inference, and choice has been one of the most important areas of cognitive psychology research, and the one referenced most frequently by DSS researchers.

III.A Judgment and Inference

The fundamental factor distinguishing DSS from any other CBIS is the use of judgment in every stage of the decision-making process, such as intelligence, design, choice, and implementation. A crucial part of cognitive psychology is the study of internal mental processes of knowing/learning/decision making, etc., of mental limitations, and of the impact of those limitations on mental processes.

Many DSS can help decision makers generate numerous decision alternatives. The decision makers use intuitive judgments of the probability of future events, such as the annual market growth rate, the annual rate of inflation, etc. Tversky and Kahneman uncovered three heuristics employed when making judgments under uncertainty (representativeness, availability of instances, and adjustment from an anchor), which are usually effective but often lead to systematic and predictable errors. Due to these cognitive limitations, the reliance on judgmental heuristics often leads to cognitive biases that eventually may cause ineffective decisions. The representativeness heuristic is applied when people are asked to judge the probability that object/event A belongs to class/process B. According to Tversky and Kahneman, the judgment of probability can be biased by many factors related to a human being's cognitive limitations, such as (1) insensitivity to prior probability of outcomes, (2) insensitivity to sample size, (3) misconceptions of chance, (4) insensitivity to predictability, (5) the illusion of validity, and (6) misconceptions of regression. The availability heuristic is used when people are asked to estimate the plausibility of an event. Its employment may lead to predictable biases due to (1) the retrievability of instances, (2) the effectiveness of a search set, (3) imaginability, and (4) illusory correlation. The anchoring and adjustment effects can also bias the estimation of various quantities stated in percentages or in the form of probability distributions, due to insufficient adjustment and/or biases in the evaluation of conjunctive and disjunctive events.

Hogarth is another cognitive scientist whose profound insight into human judgment has made a substantial contribution to the DSS area. A substantial part of his research is devoted to compiling a catalogue of human judgmental fallibilities and information-processing biases in judgment and choice. He presented a conceptual model of judgment in which judgmental accuracy is described as a function of both individual characteristics and the structure of the task environment within which the person makes judgments. Human judgments are based on information that has been processed and transformed by the limited human information processing capacity. He further decomposed the information processing activities of decision makers into (1) acquisition of information, (2) processing of information, (3) output, (4) action, and (5) outcome. He emphasized that judgmental biases can occur at every stage of information processing and that judgments are the result of interaction between the structure of tasks and the nature of human information processing systems. Decision aids are necessary in structuring the problem and assessing consequences, since intuitive judgment is inevitably deficient.

Einhorn and Hogarth reviewed behavioral decision theory to place it within a broad psychological context. In doing so, they emphasized the importance of attention, memory, cognitive representation, conflict, learning, and feedback to elucidate the basic psychological processes underlying judgment and choice. They concluded that decision makers use different decision processes for different tasks, and that these processes are sensitive to seemingly minor changes in task-related factors.

III.B Choice

Cognitive psychologists have long been interested in the area of problem solving. Payne and his colleagues attempted to understand the psychological/cognitive process that led to a particular choice or judgment using two process tracing methods—verbal protocol analysis and information acquisition behavior analysis. The verbal protocol is a record of the subject's ongoing behavior, taken from continuous verbal reports from the subject while performing the decision task (rather than through later questionnaires or interviews).

III.B.1 Four Information Processing Strategies

Payne and his colleagues focused on the identification of the information processing strategies/models and the task characteristics of the decision situation when choosing among multidimensional/multicriteria alternatives. Four of the most important decision models discussed in the cognitive psychology literature are the (1) additive/linear model of choice, (2) conjunctive model, (3) additive difference model, and (4) elimination-by-aspects (EBA) model.

The additive model allows the decision maker to choose the best candidate, the one with the greatest aggregated overall value. The conjunctive model assumes that an alternative must possess a certain minimum value on all dimensions in order to be chosen. The additive difference model directly compares just two alternatives at a time and retains only the better one for later comparison in order to reduce memory load. The two alternatives are compared directly on each attribute/dimension to determine a difference, and the results are added together to reach a decision. The selected alternative becomes the new standard against which each of the remaining alternatives is compared. The EBA model, like the additive difference model, is an intradimensional strategy. The decision maker selects the most important dimension/attribute based on its relative importance and eliminates all alternatives whose values on that attribute fall below the cutoff value. The procedure is repeated, attribute by attribute, until all but one of the alternatives have been eliminated. A sketch of all four models follows.
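Below is a minimal Python sketch of the four models. The alternatives, weights, and cutoff are invented for illustration (higher is better on every attribute), and the tie-breaking choices are simplifying assumptions:

```python
alternatives = {
    "A": {"price": 7, "safety": 5, "comfort": 6},
    "B": {"price": 4, "safety": 9, "comfort": 7},
    "C": {"price": 8, "safety": 3, "comfort": 9},
}
weights = {"price": 0.5, "safety": 0.3, "comfort": 0.2}  # importance order

def additive(alts, w):
    # Pick the alternative with the greatest weighted overall value.
    return max(alts, key=lambda a: sum(w[d] * alts[a][d] for d in w))

def conjunctive(alts, cutoff=5):
    # Keep only alternatives meeting the minimum on every dimension.
    return [a for a in alts if all(v >= cutoff for v in alts[a].values())]

def additive_difference(alts, w):
    # Compare two alternatives at a time on each dimension; the winner
    # becomes the standard for the next comparison (reduces memory load).
    names = list(alts)
    standard = names[0]
    for challenger in names[1:]:
        diff = sum(w[d] * (alts[challenger][d] - alts[standard][d])
                   for d in w)
        if diff > 0:
            standard = challenger
    return standard

def elimination_by_aspects(alts, w, cutoff=5):
    # Walk dimensions in importance order, dropping alternatives below
    # the cutoff, until all but one have been eliminated.
    remaining = dict(alts)
    for dim in sorted(w, key=w.get, reverse=True):
        if len(remaining) == 1:
            break
        survivors = {a: v for a, v in remaining.items() if v[dim] >= cutoff}
        if survivors:                  # never eliminate every alternative
            remaining = survivors
    return next(iter(remaining))

print(additive(alternatives, weights))                # C
print(conjunctive(alternatives))                      # ['A']
print(additive_difference(alternatives, weights))     # C
print(elimination_by_aspects(alternatives, weights))  # A
```

Note that on the same data the compensatory models (additive, additive difference) and the eliminative models (conjunctive, EBA) can select different winners, which is precisely the strategy-dependence taken up next.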

III.B.2 Choice of Specific Decision Strategy

Another important focal point of behavioral decision theorists' research has been the selection of a specific information processing strategy and the study of the factors that could change the choice of that strategy. These factors include information availability, time pressure, incomplete data, incommensurable data dimensions, information overload, and the decision maker's intention to save time or increase accuracy.

Payne argued that the choice of specific decision strategy is contingent upon two task determinants: number of alternatives available and number of dimensions of information available per alternative. Analysis of the decision maker's information search pattern and verbal protocols suggested that task complexity is the major factor that determines a specific information processing strategy. When dealing with a two-alternative choice task, either additive or additive difference models are used, requiring the same amount of information on each alternative. On the other hand, both the conjunctive and elimination-by-aspects models were proposed to deal with a complex decision task consisting of more than six alternatives. These conjunctive and elimination-by-aspects models are good ways to eliminate some available alternatives quickly so that the amount of information being processed can be reduced in complex decision making.
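Payne's contingency can be stated as a one-line rule. The sketch below is a toy rendering; the boundary values simply echo the two-alternative versus more-than-six contrast drawn above:

```python
def choose_strategy(n_alternatives):
    # Toy version of Payne's task-contingent strategy selection.
    if n_alternatives <= 2:
        return "additive or additive difference (compensatory)"
    if n_alternatives > 6:
        return "conjunctive or elimination by aspects (eliminative)"
    return "mixed; other task factors decide"   # assumption for mid-range
```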

III.B.3 Cost-benefit (effort-accuracy) Framework

Payne examined a cost-benefit framework for investigating the effects of task and context variables on decision behavior. According to the cost-benefit framework, a possible reason for choosing a specific decision model in a specific task environment is to maximize the expected benefits of a correct decision against the cost of using the process. This theory states that “decision makers trade-off the effort required to make a decision vis-a-vis the accuracy of the outcome.” The strategies used vary widely based on small changes in the task or its environment, and the decision maker frequently trades off small losses in accuracy for large savings in effort.
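The effort side of the trade-off can be sketched by counting elementary operations, in the spirit of Payne's effort measures. The per-operation costs and the survival rate below are invented for illustration:

```python
def additive_effort(n_alternatives, n_attributes):
    # Read, weight, and add every value, then compare the totals.
    return n_alternatives * n_attributes * 3 + (n_alternatives - 1)

def eba_effort(n_alternatives, n_attributes, survival_rate=0.5):
    # Each aspect costs a read plus a cutoff comparison per survivor;
    # assume roughly half the alternatives survive each aspect.
    ops, remaining = 0, n_alternatives
    for _ in range(n_attributes):
        if remaining <= 1:
            break
        ops += remaining * 2
        remaining = max(1, int(remaining * survival_rate))
    return ops

for n in (2, 6, 12):
    print(n, additive_effort(n, 4), eba_effort(n, 4))
# 2 25 4 / 6 77 18 / 12 155 42: the effort saved grows with set size
```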

Payne and others further investigated effort and accuracy considerations in choosing information processing strategies, especially under time constraints. They found that under time pressure, several attribute-based heuristics, such as EBA and the lexicographic choice model, were more accurate than a normative procedure such as expected value maximization. Under severe time pressure, people focused on a subset of the information and changed their information processing strategies.

Read full chapter

URL: https://www.sciencedirect.com/science/article/pii/B0122272404001428

Behavioral Economics

S. Mullainathan, R.H. Thaler, in International Encyclopedia of the Social & Behavioral Sciences, 2001

2 Three Bounds of Human Nature

The standard economic model of human behavior includes (at least) three unrealistic traits: unbounded rationality, unbounded willpower, and unbounded selfishness. These three traits are good candidates for modification.

Herbert Simon (1955) was an early critic of modeling economic agents as having unlimited information processing capabilities. He suggested the term ‘bounded rationality’ to describe a more realistic conception of human problem solving capabilities. As stressed by Conlisk (1996), the failure to incorporate bounded rationality into economic models is just bad economics—the equivalent of presuming the existence of a free lunch. Since we have only so much brainpower, and only so much time, we cannot be expected to solve difficult problems optimally. It is eminently ‘rational’ for people to adopt rules of thumb as a way to economize on cognitive faculties. Yet the standard model ignores these bounds and hence the heuristics commonly used. As shown by Tversky and Kahneman (1974), this oversight can be important since sensible heuristics can lead to systematic errors.

Departures from rationality emerge both in judgments (beliefs) and in choice. The list of ways in which judgment diverges from rationality is long and extensive (Kahneman et al. 1982). Some illustrative examples include overconfidence, optimism, anchoring, extrapolation, and making judgments of frequency or likelihood based on salience (the availability heuristic) or similarity (the representativeness heuristic).

Many of the departures from rational choice are captured by prospect theory (Kahneman and Tversky 1979), a purely descriptive theory of how people make choices under uncertainty (see Starmer (2000) for a review of the literature on non-EU theories of choice). Prospect theory is an excellent example of a behavioral economic theory in that its key theoretical components incorporate important features of psychology. Consider three features of the prospect theory value function. (a) It is defined over changes to wealth rather than levels of wealth (as in EU) to incorporate the concept of adaptation. (b) The loss function is steeper than the gain function, incorporating ‘loss aversion’: the notion that people are more sensitive to decreases in their well-being than to increases. (c) Both the gain and loss functions display diminishing sensitivity (the gain function is concave, the loss function convex) to reflect experimental findings. To describe choices fully, prospect theory often needs to be combined with an understanding of ‘mental accounting’ (Thaler 1985): one needs to understand when individuals faced with separate gambles treat them as separate gains and losses and when they pool them to produce one gain or loss.
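The three features are easy to see in a worked sketch of the value function. The power form is standard, and the parameter values are the frequently cited Tversky and Kahneman (1992) median estimates, used here purely for illustration:

```python
def value(x, alpha=0.88, beta=0.88, lam=2.25):
    # Prospect-theory value function over gains and losses (changes to
    # wealth, not levels); illustrative parameter values.
    if x >= 0:
        return x ** alpha            # concave over gains
    return -lam * (-x) ** beta       # convex and steeper over losses

print(value(100.0), value(-100.0))  # ~57.5 and ~-129.5: loss aversion
print(value(200.0) - value(100.0) < value(100.0))
# True: the second $100 adds less value than the first (diminishing
# sensitivity)
```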

A few examples can illustrate how these concepts are used in real economic contexts. Consider overconfidence. If investors are overconfident in their abilities, they will be willing to make trades even in the absence of true information. This insight helps explain a major anomaly of financial markets: in an efficient market where rationality is common knowledge, there is virtually no trading, but in actual markets there are hundreds of millions of shares traded daily, and most professionally managed portfolios are turned over once a year or more. Individual investors also trade a lot: they incur transaction costs, and yet the stocks they buy subsequently do worse than the stocks they sell.

An example involving loss aversion and mental accounting is Camerer et al.'s (1997) study of New York City cab drivers. These cab drivers pay a fixed fee to rent their cabs for 12 hours and then keep all their revenues. They must decide how long to drive each day. A maximizing strategy is to work longer hours on good days (days with high earnings per hour, such as rainy days or days with a big convention in town) and to quit early on bad days. However, suppose cabbies set a target earnings level for each day and treat shortfalls relative to that target as a loss. Then they will end up quitting early on good days and working longer on bad days, precisely the opposite of the rational strategy. This is exactly what Camerer et al. found in their empirical work.
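A small simulation, with an invented earnings target and invented good-day and bad-day wages, reproduces the pattern:

```python
import random

random.seed(0)

def hours_worked(wage, target=200.0, max_hours=12.0):
    # Target-earnings rule: drive until the daily target is reached.
    return min(max_hours, target / wage)

# Good days pay $25/hour, bad days $15/hour (invented values).
daily_wages = [random.choice([15.0, 25.0]) for _ in range(1000)]

good = [hours_worked(w) for w in daily_wages if w == 25.0]
bad = [hours_worked(w) for w in daily_wages if w == 15.0]
print(f"good days: {sum(good) / len(good):.1f}h, "
      f"bad days: {sum(bad) / len(bad):.1f}h")
# good days: 8.0h, bad days: 12.0h, the opposite of the maximizing
# strategy of working the full shift on good days and quitting early
# on bad ones.
```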

Having solved for the optimum, Homo economicus is next assumed to choose the optimum. Real humans, even when they know what is best, sometimes fail to choose it for self-control reasons. Most of us at some point have eaten, drunk, or spent too much, and exercised, saved, or worked too little: such is human nature. People (even economists) also procrastinate. We are completing this article after the date on which it was due, and we are confident that we are not the only guilty parties. Although people have these self-control problems, they are at least somewhat aware of them: they join diet plans and buy cigarettes by the pack (because having an entire carton around is too tempting). They also pay more withholding taxes than they need to (in 1997, nearly 90 million tax returns received an average refund of around $1,300) in order to assure themselves a refund, but then file their taxes near midnight on April 15 (at the post office that is being kept open late to accommodate their fellow procrastinators).

Finally, people are boundedly selfish. Although economic theory does not rule out altruism, as a practical matter economists stress self-interest as the primary motive. For example, the free rider problems widely discussed in economics are predicted to occur because individuals cannot be expected to contribute to the public good unless their private welfare is thereby improved. In contrast, people often take selfless actions. In 1993, 73.4 percent of all households gave some money to charity, the average dollar amount being 2.1 percent of household income. Also, 47.7 percent of the population does volunteer work, with 4.2 hours per week being the average time volunteered. Similar selfless behavior is observed in controlled laboratory experiments: subjects systematically cooperate in public goods and prisoner's dilemma games, and turn down unfair offers in ‘ultimatum’ games.

Read full chapter

URL: https://www.sciencedirect.com/science/article/pii/B0080430767022476

The criticality of social and behavioral science in the development and execution of autonomous systems

Lisa Troyer, in Human-Machine Shared Contexts, 2020

7.3 Limitations of cognition and implications for learning systems

One of the most striking challenges for the integrated performance of humans and autonomous systems is to achieve effective team performance, including for future human-machine teams. As noted by Matthews et al. (2016), system resilience is a critical concern. That is, human operators and machines need to achieve a symbiotic relationship that allows machines to adapt to cognitive demands, operator trust, and capabilities. Matthews and his colleagues propose that this adaptation requires autonomous systems designed to process the capabilities and intent of the human operators with whom they are interacting. In addition, autonomous systems must recognize and, in some cases, correct for deficiencies in human cognition. Nobel Prize-winning research by both Herbert Simon (1991) and Daniel Kahneman (2003) established limitations in human cognition. Simon famously developed the blueprint for the limits of human rationality in decision-making, which he referred to as “bounded rationality.” By this term, Simon noted that humans were incapable of purely rational decisions due to limited cognitive capacities in perception, memory, retrieval, and processing. These limitations arise, in part, because of the complex interdependencies and tensions that human decision-makers must confront. For instance, the inputs that lead to outcomes are vast and highly interdependent; as March and Simon (1958, p. 27) state, “…an activity elaborated in response to one particular function or goal may have consequences for other functions or goals.”

Kahneman (1973) drew on this concept of bounded rationality but focused less on the interdependencies that undermine full information processing and more on the biases in human processing, which formed the framework for prospect theory (e.g., Kahneman, Slovic, & Tversky, 1982; Kahneman & Tversky, 1979). This theory asserts that individuals attempt to assess risk in decision alternatives, with respect to potential losses and gains, in order to choose the outcome that they believe is likely to generate the best payoffs. The theory stands in stark contrast to prior theories based on optimal decisions captured by then-dominant utility theories, because it takes into account what people actually do as opposed to what the best outcome would be. As Kahneman and his colleagues demonstrate, however, what people actually believe to be the best course of action is affected by a number of cognitive limitations, which lead them to process information heuristically as opposed to using full information. In particular, they identified three heuristics that humans use when perceiving and processing information to determine a course of action. First, the availability heuristic refers to the use of information that is easily accessible. For instance, infrequent and high-impact events can often be easily brought to mind, leading people to overestimate the likelihood of such an event. This heuristic often leads to risk-averse behavior (i.e., avoiding those potential negative outcomes). Second, the representativeness heuristic refers to situations in which people use known categories of events, actors, or objects as a comparison case and fail to note how common the category actually is in the population. This heuristic results in overestimation of how rare the individual, object, or event is, undermining the ability to accurately detect cause and effect. Third, Kahneman et al. identified human tendencies to inaccurately estimate a numerical value, such as the percentage of females in an organization, organizational productivity, or the likelihood of a major disaster, when presented with an initial potential value (i.e., the anchoring and adjustment heuristic). They demonstrated that individuals who are provided with an initial anchor in the form of a question (e.g., “Is the likelihood of a tornado in Kansas greater or lesser than 75%?”) tend to deviate only slightly from the anchor (in this example, 75%), leading the anchor to provoke a biased estimate that informs an action.

Additional cognitive research shows a variety of biases in how individuals evaluate information, causality, and responsibility. Relevant here is work under the umbrella of attribution theory, including correspondent inference theory (e.g., Jones & Davis, 1965), which documents biases in how humans infer intent from the observation of action. This line of research assesses biases that lead observers to infer whether the behavior of an actor is due to choice, intent, accident, internal dispositions, an intent to harm the observer, and/or a goal of affecting the observer's behavior. Building on this work, Kelley (1967) proposed a covariation theory of attributions of behavior. This theory proposes that three dimensions of an actor's action determine whether the target of the action attributes it to the actor or the situation: whether the action is (1) common among others (consensus), (2) uncommon for the individual in similar situations (distinctiveness), and (3) common for the actor in the same situation (consistency). Attribution processes, which describe the cognitive factors that play into inferring causality and intent, are based on cultural norms. Consequently, they carry high risks of introducing biases into cross-cultural interaction, because the norms governing interpretations of people, actions, and situations vary by culture. This cultural bias can lead individuals from one culture to misattribute the causality and intent of another actor or group of actors from a different culture.
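The covariation pattern is compact enough to encode as a lookup. The sketch below follows the textbook mapping of Kelley's three dimensions; the fallback for mixed patterns is a simplifying assumption:

```python
def kelley_attribution(consensus, distinctiveness, consistency):
    # Inputs are 'high' or 'low' on Kelley's three dimensions.
    if consistency == "low":
        return "circumstances"              # behavior is unstable
    if consensus == "high" and distinctiveness == "high":
        return "situation (the stimulus)"   # everyone does it, only here
    if consensus == "low" and distinctiveness == "low":
        return "person (the actor)"         # only this actor, everywhere
    return "mixed / no clear attribution"   # simplifying fallback

print(kelley_attribution("high", "high", "high"))  # situation
print(kelley_attribution("low", "low", "high"))    # person
```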

There is much more research on the cognitive capabilities and limitations of human actors than the examples cited here, but an important point of these examples is that they have been computationally modeled, making them amenable to the development of autonomous systems that can avert the biases they can introduce into human decision-making. On the surface, that appears to be an important advantage. Yet there are two important considerations regarding these cognitive limitations when developing autonomous systems: (1) human cognitive biases enable learning, especially when there are repeated encounters with the situation and behavior, and (2) human cognitive biases often force a delay between cognition and action that opens the door for additional inputs (in part due to complex interdependencies that evolve over time and space). This delay is important because it enables opportunities for adjustment before action is taken. Whether the aim of an autonomous system is to enable learning (by itself or by its human interactants), to allow for all possible interdependencies and inputs, or simply to enable rapid real-time rational decisions, these systems are likely to be optimal if they have the capacity to analyze biases and correct for them (i.e., learn) and to predict, detect, and analyze complex interdependencies.

Read full chapter

URL: https://www.sciencedirect.com/science/article/pii/B9780128205433000079

Experiments on personal information disclosure: Past and future avenues

José Luis Gómez-Barroso, in Telematics and Informatics, 2018

4.2 The study of inconsistent behavior

As seen in the previous section, only a few of the relatively small number of experiments conducted in the area of online disclosure of personal data explore the inconsistent behavior of users. That means that, in this case, there is a long way ahead. In the lines that follow, a description of possible heuristics is provided. The word “possible” should be stressed: their validity obviously cannot be stated a priori. They are proposed based on anecdotal evidence and on the scrutiny of companies' activities, as firms have indeed developed expertise in exploiting behavioral processes and can even uncover, and trigger, consumer frailty at an individual level (Calo, 2014).

According to the classic definition, heuristic principles reduce the arduous task of assessing probabilities and predicting values to simpler judgmental operations; generally, heuristics are rather helpful, but sometimes they lead to systematic and severe errors (Tversky and Kahneman, 1974). In the case of the disclosure of personal information, a number of heuristics may be at work. The following ones have already been cited in the literature review in the previous section:

-

Availability heuristic. There is a tendency to judge probability based on the ease or difficulty with which an event can be recalled. Public information campaigns or recent news about leaks of personal information databases or about data misuse can increase exposure to such events, making the risks seem more prevalent.

As seen before, this is the heuristic that has so far received the most attention.

-

Use of a reference point. Decisions about privacy are not formed in a vacuum. There is an expectation about what data companies ask for and what data are usually disclosed. Equally, there is an expectation of what it is “appropriate” to disclose in a social network. The reference point may be influenced by the context.

The existence of reference points is one of the (apparently) most evident heuristics that we all, in one way or another, rely on. The experiment cited in the previous section (Keith et al., 2014) addresses it from a restricted perspective. There is wide scope for further analysis.

-

Framing effects. The wording of a question influences preferences, in particular when the question refers to sensitive data. The order in which information is asked for is also important, as not all questions about oneself are received in the same way; once the limit of acceptability is reached, people may refuse to continue or may even want to leave. A third effect can arise from the circumstances of and justification for data collection, in both specific and general contexts (in this last regard see Steinfeld, 2017, though its conclusions are based on a survey and not on an experiment).

As in the previous case, framing effects seem a priori to have a pervasive influence on disclosure decisions. Again, this is still a largely unexplored field of research, as the previously cited article (Acquisti and Grossklags, 2012) is but one of many possible examples.

-

Adherence to the default option. Rational models suppose that people have well-formed preferences. However, people may not have formed preferences over many of the choices they face. If this is the case, the default option fills the void of preferences. In this regard, the default option can perform much like a reference point.

The effect of opt-in versus opt-out defaults has already been studied, but many other experiments about the influence of untouched settings can be imagined.

Other heuristics that might be effective in influencing or directing privacy behavior can be added to the list (a list that is obviously far from exhaustive):

-

Immediate reward/payment decoupling. “Consumption” (i.e., access to a service obtained after data disclosure, or benefits derived from the personalization of applications and services) and “payment” (potential misuse of personal information) are not closely linked, as they are separated substantially in time. As a result, we may be in the presence of present-biased preferences. An extreme discounting of the future can lead to procrastination when it comes to taking actions to defend privacy; a sketch follows.
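This present bias is commonly formalized as quasi-hyperbolic (beta-delta) discounting. In the sketch below, the cost, benefit, lag, and parameter values are all invented for illustration; the point is that acting “tomorrow” always looks better than acting today, from today's point of view, so the protective action keeps being postponed:

```python
def value_of_acting_at(t, cost=5.0, benefit=10.0, lag=12,
                       beta=0.7, delta=0.99):
    # Present value, seen from today, of paying a privacy-protection
    # cost at period t and receiving its benefit at t + lag, under
    # quasi-hyperbolic (beta-delta) discounting: every future period
    # takes an extra one-off hit of beta.  All numbers are illustrative.
    discount = lambda s: 1.0 if s == 0 else beta * delta ** s
    return -cost * discount(t) + benefit * discount(t + lag)

print(value_of_acting_at(0))  # ~1.2: acting now beats doing nothing...
print(value_of_acting_at(1))  # ~2.7: ...but acting tomorrow looks even
                              # better, and it will again tomorrow.
```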

-

Visceral factors. Linked to the aforementioned heuristic, a special class of projection biases happens as a result of emotions and physical drives or feelings. When a person is in a hot state (when a visceral factor is active), the instantaneous utility function may differ significantly from the expected, regular utility function.

-

Recency/primacy effect. People update their beliefs quickly when the information (in particular, privacy notices) is easy to understand, displaying a recency effect. On the contrary, when the information is complicated and requires real cognitive effort to discern, the initial beliefs will probably persist (i.e., a primacy effect is present).

On a separate issue, the same comments about the methodology and design of the experiments made in the previous subsection apply whatever the goal of the experiment.

Read full article

URL: https://www.sciencedirect.com/science/article/pii/S0736585318302107
