Episode 020:

The Science of Changing Course

with Dr. Jennifer Lerner

July 6th, 2022


Episode description

How can celebrating failure make us more successful? Tenured Harvard Professor and former Chief Decision Scientist in the U.S. Navy, Dr. Jennifer Lerner, joins your host, Dr. Joe Sweeney, Executive Director of the Alliance for Decision Education, to discuss why most organizations reward leaders who keep investing time and money in underperforming projects, and how to break this pattern. She also explains how leaders can terminate an initiative while maintaining the confidence of their team, why throwing “failure parties” may be a key to success, and shares a surprising insight about how feeling angry can make us unrealistically optimistic.

Dr. Jennifer Lerner is the Thornton F. Bradshaw Professor of Public Policy, Management and Decision Science at the Harvard Kennedy School. She is the first psychologist in the history of the Harvard Kennedy School to receive tenure. Jenn also holds appointments in Harvard’s Department of Psychology and Institute for Quantitative Social Science. She has also taken leave from Harvard to serve as the Navy’s first Chief Decision Scientist, reporting to the U.S. Chief of Naval Operations (2018-2019).

Drawing insights from psychology, economics, and neuroscience, her research examines human judgment and decision-making. Together with colleagues, she developed a theoretical framework that successfully predicts the effects of specific emotions on specific judgment and choice outcomes. Applied widely, the framework has been especially useful in predicting emotions’ effects on perceptions of risk, economic decisions, and attributions of responsibility. Published in leading scientific journals, and cited over 31,000 times in scholarly publications alone, Jenn’s research also regularly receives coverage in popular media outlets including Good Morning America, National Public Radio, NOVA, The Wall Street Journal, The Washington Post, and The New York Times.

In a White House ceremony, Jenn received the Presidential Early Career Award for Scientists and Engineers (PECASE), the highest honor bestowed by the U.S. government on scientists and engineers in the early stages of their careers. She has also received the National Science Foundation’s (NSF) Faculty Early Career Development Award and the NSF’s “Sensational 60” designation.

Jenn serves on a diverse set of boards, including the scientific advisory boards for two corporations in the machine learning and decision-making space, as well as the Faculty Steering Committee for Harvard’s Mind-Brain-Behavior Initiative. Previously, she served for two years on an expert panel within the National Institutes of Health, and for three years as the first behavioral scientist ever appointed to the United States Secretary of the Navy’s Advisory Panel. In this role, she provided input to the Secretary on critical matters faced by the Navy and the Marine Corps.

In 1998, Jenn received her Ph.D. in psychology from the University of California, Berkeley. After a National Institutes of Health postdoctoral fellowship at UCLA, Jennifer joined the Carnegie Mellon University faculty, serving first as Assistant Professor of Social and Decision Science and later as the Estella Loomis McCandless Associate Professor of Social and Decision Science. She joined the Harvard faculty and received tenure in 2007.

Joe: I’m excited to welcome our guest today, Dr. Jennifer Lerner.

Dr. Lerner is a tenured professor at Harvard University with appointments at the Harvard Kennedy School, the Department of Psychology, and the Institute for Quantitative Social Science. She specializes in the study of decisions with small margins for error, such as health, finance, and national security decision-making.

An award-winning teacher, she is also the founding faculty director of Harvard’s popular Leadership Decision Making executive education program. From 2018 to 2019, Professor Lerner served as the Navy’s first Chief Decision Scientist, reporting to the U.S. Chief of Naval Operations.

Jenn, welcome to the podcast. Delighted to have you here. The whole team has been excited for this episode. Thank you for coming today.

Jenn: Thank you, Joe. Delighted to be here.

Joe: I was wondering if you could describe what you do, for non-academics?

Jenn: Well, I do two things primarily, teaching and research.

I teach courses for a variety of people ranging from undergraduates at Harvard University to senior-level executives. And the main topic that I teach is how to make smart judgments and decisions. Most importantly, I’m interested not even so much in helping the people in the course to make better judgments and decisions, but I’m interested in teaching them how to design decision-making environments so that people in the organizations they lead — or they will eventually lead — can all make better judgments and decisions.

Then, on the research side, I’m interested in the effects of social-structural factors on judgment and decision-making such as: how do different kinds of accountability affect judgment and choice? And my primary line of research is: how do emotions affect judgment and decision-making?

Joe: Thank you for that. Did one lead to the other, [with] the teaching and the research?

Jenn: I knew as early as my PhD program when I was in graduate school that I wanted to be a researcher, and I was pretty intimidated about teaching. Once I actually became a professor, I fell in love with teaching and discovered that I love teaching and research equally.

There’s a really synergistic relationship between the two for me: questions that arise from teaching help fuel my research, and I can give you a specific example of that. Certainly, questions from research fuel my teaching too: I try to make sure that regardless of whether I’m teaching senior executives, undergraduates, or PhD students, everything I teach has an evidence base to it. I’m not a fan of giving my own opinion about things unless there really is not an evidence base.

Joe: You piqued my interest by saying that you could give a story about that. So I’m just going to jump right on that and say, please do.

Jenn: Sure. So one of the well-known biases in human judgment and decision-making is the tendency to pay too much attention to sunk costs. Sunk costs, as you know, are investments we’ve made in the past that are unrecoverable. Anyone who’s taken an introductory class in microeconomics probably learns that prior investments should not factor into decisions about future investments, when the investment is unrecoverable. And yet, there is this strong human tendency to want to escalate commitment to anything we’ve invested in, in the past. That could be because we put money into a particular product line, for example, or because we put our own time into mentoring a person or into visiting some place. It can even be as simple as: we bought tickets to a movie, and it turns out the movie is terrible. And most of us feel like we should stay, even though there would be better ways to spend that time. So that’s the essence of sunk cost bias.
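[Editor’s note: a minimal sketch in Python of the textbook rule Jenn describes, with invented numbers. The point is that the sunk cost appears nowhere in the comparison; only the future costs and benefits of each option do.]

    # Invented numbers, purely illustrative.
    sunk_cost = 2_000_000  # already spent and unrecoverable; it should not enter the decision

    # Option A: stay the course -- expected future benefit minus future cost
    continue_value = 1_500_000 - 1_200_000  # = 300,000

    # Option B: pivot to a different initiative
    pivot_value = 2_000_000 - 1_400_000     # = 600,000

    # The forward-looking rule compares only future costs and benefits;
    # note that sunk_cost is never referenced.
    print("pivot" if pivot_value > continue_value else "continue")  # -> pivot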

And so for a long time, I’ve been teaching sunk cost bias and how to design strategies to avoid escalating commitment. And some of the senior executives who take the Leadership Decision Making executive program that I run started pushing back a lot. They started saying, “Jenn, you have been spending too much time in academia, not enough time in the so-called ‘real world,’ and had you spent more time in the ‘real world,’ you would realize that sunk cost bias is not a bias at all. In fact, any leader who goes back on something they invested in will suffer deep reputational costs.”

Joe: Mm-hmm.

Jenn: And I used to hear this especially from people who work in diplomacy. So we would commonly have ambassadors in the class who used to take me aside at the break because they’re very diplomatic.

Joe: Nice [laughs].

Jenn: So rather than saying to me right there in class, “Jenn, you fool,” they would take me aside and say, in very diplomatic terms, “Jenn, you fool.” [laughs] So they started articulating how, if they made an agreement on behalf of the country they represent, even if that agreement was absolutely awful and everyone knew it, that going back on it was certainly an end to their career and would cause international disruption.

So, of course I’m simplifying, but we started receiving a lot of pushback. They didn’t even want to hear the word “bias” associated with sunk cost. And so my doctoral students at the time and I decided to test the speculations that they came up with. We designed a series of experiments led by Charlie Dorison, who was a doctoral student at the time and is currently a postdoctoral fellow at Northwestern’s Kellogg School of Management, together with Chris Umphres, who was a full-time, active-duty Air Force officer while getting his PhD in my laboratory. Chris had firsthand knowledge of escalation to sunk costs that he had seen within the Air Force.

We designed a series of experiments to test these hypotheses, and sure enough, what we found was that, yes, indeed, as the executive education participants hypothesized, the leaders who backed away from their prior commitments were penalized: they were viewed as less competent and less effective. At first we thought, well, okay, so they are disparaged, they receive these negative trait attributions, but if we actually push observers to put their money where their mouth is, they will of course realize that escalating commitment to a failing course of action is not a good idea. And in each of the studies that we did, not only was there a sunk cost, but the sunk cost was not working out, so it was viewed as a failing course of action.

Joe: Wow.

Jenn: And even in those circumstances where we introduced financial incentives, using a variety of behavioral economic games (games like the Trust Game or the Dictator Game), we actually found that observers punished the leaders who backed away from their prior courses of action, even when those courses of action were not working out. And they rewarded the leaders who stayed the course, and the key explanation for this was that they found them more trustworthy. A really interesting additional finding that we hadn’t expected at all is that the leaders who chose to stay the course, rather than back away from a potentially failing course of action, turned out to actually be more trustworthy.

Joe: Whoa.

Jenn: Yeah. Those same leaders ended up returning more money to the other players. And so if you want to summarize these findings, it turns out that leaders are rewarded on a local level for staying the course, even if it looks like that course is not going to be successful. The problem is that if you zoom up to the organizational level, there’s this symbiotic relationship between the decision-makers who are escalating commitment to their prior courses of action and the observers who are rewarding them for doing so. But in the long run, this does not work out well for the organization, because the organization needs to spend its money wisely and should instead be calculating only the costs and benefits of future investments and making decisions based on that. So they’re probably, on average, staying longer with things than they should, even if it works out locally for the leader and the observers in that leader’s world.
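[Editor’s note: for listeners unfamiliar with these paradigms, here is a minimal sketch of one Trust Game round in Python. The $10 endowment and the tripling multiplier are the conventional parameters from the behavioral economics literature, not necessarily the exact ones used in these studies.]

    def trust_game_round(sent: float, returned_fraction: float, endowment: float = 10.0):
        """One round: an investor sends money to a trustee, the experimenter
        triples it in transit, and the trustee chooses how much to return."""
        assert 0 <= sent <= endowment and 0 <= returned_fraction <= 1
        pot = 3 * sent                       # the experimenter triples the transfer
        returned = returned_fraction * pot   # the trustee's return indexes trustworthiness
        investor_payoff = endowment - sent + returned
        trustee_payoff = pot - returned
        return investor_payoff, trustee_payoff

    # How much observers send to a leader measures trust in that leader;
    # how much the leader returns measures actual trustworthiness.
    print(trust_game_round(sent=10, returned_fraction=0.5))  # -> (15.0, 15.0)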

Joe: Right.

Jenn: The first paper in this line of work is already published in the Journal of Experimental Psychology: General; it’s Dorison et al., and listeners can find that. The second paper is currently a manuscript that we are finishing up. So it’s altogether a series of seven experiments that all use these financial incentives, so that we’re not just studying what people think, but their actual behavior when real money is on the line.

Joe: So the folks that don’t share more [money] back are the ones that are behaving more rationally with respect to sunk cost fallacy. Am I following that right?

Jenn: I would put it the other way and say that the people who tend to escalate commitment are the ones who are more likely to share.

Joe: A game is a model of sorts, right? It’s a toy for us to play with the dials and test our ideas with. And the leaders that you were speaking with in the executive education program, they were saying, “Well, you’re missing important context here. You’re evaluating this one output, but we have more than one output that we’re managing for. We’ve got the success or failure of this project and the sunk cost that might be involved, but we also have our reputation as leaders and how our peers and subordinates and superiors are evaluating us as people who are trustworthy [or not].” Is that fair, or are you saying back to them, “No, no. That’s also captured in the model. You’re missing the point here”?

Jenn: Yeah. So let me say, based on these results, Charlie Dorison has developed a large program of research on cases where monetary outcomes are in tension with reputational outcomes. And it turns out that there are a number of cases — like in this example with sunk costs — where they are in tension with one another. Part of it has to do with levels of analysis. So for the individual leader, they may get both monetary reward and a reputational reward when they escalate commitment, but the broader organization is ultimately going to lose because they’re staying with too many existing investments. So that’s an example of where they’re in tension.

Are you asking, Joe, whether when we use a game theoretical paradigm like the Dictator Game or the Trust Game, whether we’re dropping out too much real world context, is that the question?

Joe: I am. I’m wondering if leaders are perhaps optimizing on multiple constraints instead of just the financial constraint — so in this case, the reputation — and if that’s often being dropped and if that’s why the biases are showing up in the research and the leaders are saying, “No, no, that’s not really there.”

Jenn: Yes. So that’s exactly what we’re finding: they’re optimizing on more than just the monetary outcome, or in other words, more than the expected value calculated only from future investments. They’re also taking their reputation — as a person who sticks with their word — into consideration. And then interestingly, we can use a fairly stripped-down kind of marketplace that we create in order to test those predictions.

Joe: Beyond teaching and research, you also do some consulting and advising. I’m wondering, when research like this comes out, when you’re discovering that organizations are at risk by having local-level leaders optimize for more than the outcomes of projects — they’re optimizing for their own reputation — do those organizations then reach out to you, or do you reach out to them and say, “Okay, this is a problem and here’s what to do about it”? And is there something that we can do about it?

Jenn: There’s definitely something you can do about it [laughs].

Joe: Okay [laughs].

Jenn: Yeah. And so for example, I’ve been teaching the course that runs within the Air Force for newly promoted two-star generals and their civilian equivalents. The former vice chief of staff of the Air Force is a former executive education participant of mine, and so when he became vice chief, which is the number-two person in the Air Force, he invited me to start teaching Decision Science to these generals and their civilian equivalents, and I’ve been doing it ever since.

We certainly cover sunk cost bias, and we talk explicitly about the differences between these reputational concerns and the strictly economic concerns. And I have 11 different strategies for avoiding falling victim to sunk cost bias.

Joe: Oh, okay!

Jenn: The reason there’s 11 [laughs] is because this is one of the most profound, most robust biases that we all fall victim to, and none of these strategies is the silver bullet. They really need to work in combination with each other, so it’s helpful to have done this research where we can share with them: look, you are going to get rewarded reputationally [laughs] as the person who’s like, “yeah, we’ll double down, we’ll stick it out,” you know, and to bring that to light and to say, “but that doesn’t mean you should necessarily stay with it, you’ve got to consider this tension.”

So one of the things that seems most promising is to design, ahead of time, preset intervals in the future where you will say to yourself, “Okay, by, say, three months from now, we have to be at this level of performance, and if we’re not, we have a fixed contingency plan for what we’re going to do instead.” When we don’t have those kinds of preset intervals with fixed contingency plans for changing course, we get to that three-month point and we say to ourselves, “You know what, we’re not where we really hoped to be, but it’s because we’re learning so much, and we should really keep doing this just with more effort, or with a slight tweak, or …” We can all come up with reasons.

Joe: Right.

Jenn: So I can even do this as a researcher: we ran the study, the results don’t support the hypothesis and instead of saying, “well, the hypothesis could be wrong,” I think to myself, “we did not operationalize the variable in the right way. Really what we need to do is…” [laughs]

Joe: Right.

Jenn: But if we can instead write in fixed contingency plans where we “tie ourselves to the mast” to use a Ulysses metaphor, we can build in appropriate degrees of freedom, so latitude to try different things, but the important thing is to build that in ahead of time, not after we’ve put in all this effort and just by our own human natures, we are going to continue no matter what. Typically, something has to fail miserably before we will give up on it.

One of the things that biotech industries near me in the Boston area have started to do, and this is something that in Decision Science we’re trying to get everyone to do, is to focus more on the decision process than the decision outcome.

Joe: Yes.

Jenn: And so biotech companies are saying, “Look, there’s so much money invested in these products. We can’t afford to spend a day longer on something that isn’t going to work than we should. And so we’re going to heavily incentivize excellent analytic process, evaluating at these preset intervals whether we are reaching the goals, whether the experiments are coming out the way we expected they would, we’re going to reward that process so much that we will even throw a party for any product that is failing.” And so they’ve started having failure parties, where they’re essentially celebrating a good analysis and celebrating people who are going against the natural human tendency to just keep saying, “We can do it. We can do it. I know we can!” I think one of the things that an organization can do is to say right off the bat, we care more about your analysis than about you trying to say, “No matter what it takes, we’re going to develop this particular product.”


“Sunk cost bias … is one of the most profound, most robust biases that we all fall victim to … by our own human natures, we are going to continue no matter what. Typically, something has to fail miserably before we will give up on it.”  — Dr. Jennifer Lerner


Joe: Do you think that strategy works because it protects the reputation of the leader, and they’re still seen as trustworthy because they followed one of the things that they said they would do? Or is it that this is overriding trustworthiness as an important goal, and it’s really just pre-committing people to a course of action that will prevent them from falling for that reputation risk?

Jenn: So I think that trust is always essential and always part of any relationship with a leader. It’s the lubricant that makes organizations work. Even in relatively flat organizations that don’t have a lot of hierarchy, that trust is crucial. So I think that both mechanisms are probably at play. One is that a leader can announce ahead of time, “This is what we’re aiming for: we’re aiming for great analysis [and] great analytic process here.” That has the dual benefit of good analytics, so they get where they need to be faster (they’ve essentially accelerated organizational learning), and also, by signaling what they value, they have helped to build trust with others in the organization. And so if I’m a leader in an organization, I say ahead of time, “This is what we’re aiming for: we’d love for this initiative to work. But what we really need to do is value the analysis of whether it’s working the most, because we want to quickly pivot to something else if it’s not working.”

Joe: Yeah.


“If I’m a leader in an organization … I [should] say ahead of time … ‘We’d love for this initiative to work. But what we really need to do is value the analysis of whether it’s working the most, because we want to quickly pivot to something else if it’s not working.’”  — Dr. Jennifer Lerner


Jenn: And so if you don’t signal that ahead of time, then people put in nights and weekends, so committed to making project X work, and then if project X doesn’t work, they feel like everything failed, and you called it off, and now they’re blindsided. But if instead you say, “Yes, we are going to work nights and weekends, we’re hoping project X works, but the minute we detect it’s not, when we see that the benefits of switching outweigh the cost of proceeding, then what we really celebrate is the process, the analysis, and we’re going to switch to project Y.” And then when everyone knows that ahead of time, it gives the leader the latitude to do that kind of switching.

There are many other strategies, as I mentioned, 11 or so [laughs], that all have their pros and cons. I’ll touch on one which I think many people use but that I don’t have a lot of faith in.

Joe: Okay [laughs]

Jenn: And this is going to be an unpopular statement, but I did serve for several years on an advisory board advising the Secretary of the Navy, who oversees the Navy and the Marine Corps, before I went on to be the Chief Decision Scientist. So I was originally on the civilian side, then I switched over to the military side. And none of these things I ever expected to do, by the way, but I’ve taught so many national and international security leaders that they asked me to do this. What I saw in those roles is that a lot of times, consultants are brought in and the consultant is asked, “What do you think we should do on this project?” And unfortunately, there is an inherent conflict of interest that is really hard to get around, especially with the Department of Defense, [which is that] consultants really want to be hired again and again and again by the Department of Defense. So it’s hard to say to the Department of Defense senior leader, “You should really back off of this sunk cost and pivot to something else,” because that would obviously be an unpopular thing to say.

And so I would say that bringing in a consultant comes with the idea that you’re getting a non-stakeholder and that, “wow, this person will really be an outsider who can tell us what to do.” But unless you say to that outsider, “I will never hire you again,” they’re not really a non-stakeholder. They have some interest in saying something to you that will make you happy. So if you do bring in someone who’s a consultant with the idea that they might be a non-stakeholder, you might want to say to them, “I have to tell you, this is a one-time engagement. I cannot hire you again.” [laughs] And the other thing is, you might want to consider not telling them what you’ve already invested in, and try to blind them to that, because knowing what you’ve already invested in, most people are likely to say, “This is how you can try to make that prior investment work.”

Joe: So I appreciate that that strategy isn’t one you’ve got a lot of faith in. Is there a second, one you do have a decent amount of faith in, that you could share?

Jenn: So one that I would give is: divide your team, your advisory team, into two groups, and share with one of those groups the information on prior investment, which is crucial in order to take reputational concerns into consideration. And then for the other team, don’t tell them anything about prior investments. They’re essentially putting themselves behind a veil, and they’re making the decision based more purely on future investments and future outcomes. Then, if those two teams come back with different recommendations, you as a leader are able to balance the two. You have one that more clearly matches what neoclassical economics would dictate is the rational procedure, and you have another that takes into consideration more social reputational concerns in your organizational networks.

Joe: It’d be interesting to track that gap over time in an organization and see how much reputational value is showing up in decision-making.

Jenn: Yeah. And I could go on with other recommendations as well.

Joe: If you’d like to. Sure!

Jenn: Sure. So another one, which is not a modern discovery of Decision Science but one that a lot of Decision Science backs up: surround yourself with people who are willing to speak truth to power, and incentivize them for speaking truth to power.

I actually was in a situation like that. When I served as the Chief Decision Scientist, I reported directly to the Chief of Naval Operations, Admiral John Richardson, he’s one of the Joint Chiefs. The Joint Chiefs are the service heads for each of the branches of the military that report directly to the President. So he allowed me to give my advice on a lot of business, and I can’t go into too much detail about what I saw [laughs] …


“Surround yourself with people who are willing to speak truth to power, and incentivize them for [doing so].”  — Dr. Jennifer Lerner


Joe: Gee, why not? [laughs]

Jenn: … But we did work together, for example, on a Navy strategic plan, and I was in a position where — without major repercussions — I could tell him anything that I didn’t agree with, I could tell him the scientific literature at a time when it was relevant to what he was considering, and I could run analyses for him. What was wonderful is I knew I could go back to my tenured position at Harvard, so I had essentially nothing to lose in voicing unpopular positions to him.

We actually used to joke about it, because I would tell him about how at the tenured faculty meetings at Harvard, everyone feels like they can say anything. One time I joked with him and said, “Yeah, we’re all savages.” He picked that up, and he used to say, “Bring the savage!” so then I would say to him directly, “That’s a bad idea.” And we had established those were the ground rules.

So if someone in a position of leadership can find various ways — obviously ours was a unique situation — to have people empowered to say to you, “That’s a bad idea and here’s why, and let me show you the analysis, or let me show you the evidence,” that kind of thing, then you are less likely to be surrounded by others who are sycophants. And, you know, it’s as old as ancient times …

Joe: It’s the court jester.

Jenn: Exactly!

Joe: I very much wanted to get to the other area of your research … your work around anger and fear, and how they affect decision-making differently. I think our audience will be fascinated by it. Maybe you could just talk about that for a minute: what you’ve learned there and what we should know.

Jenn: Sure. So as you can see I love talking about judgment and decision-making! [laughs]

Joe: That’s great [laughs].

Jenn: So there’s a lot of myths about emotion and decision-making.

Joe: Okay.

Jenn: And you know, partly these myths exist because in the 20th century — especially the early part of the 20th century — there was this idea that emotion is not a scientific topic and we should not study it. One of Harvard’s own B.F. Skinner’s famous quotes is, “Emotion is the fictional cause to which we attribute behavior.” He was obviously focused on stimulus-and-response relationships, and he signaled to everyone, “you’re silly if you think emotion matters.” Then around the latter half of the 20th century, people said, “Oh, silly us. Clearly, it’s not just stimulus and response. There’s something in between: there’s stimulus, thinking, and then response,” and that led to the cognitive science revolution. So it wasn’t until the end of the 20th century that people started to realize, “Wait a minute here, it’s not just stimulus, cognition, behavior; there are also emotions!”

Charles Darwin knew a lot about emotion and even wrote a book called The Expression of the Emotions in Man and Animals, which went out of print; you couldn’t even buy it until 1998. And some important thinkers, like the late Nobel Laureate Herb Simon, started drawing attention to emotion toward the end of the 20th century. As early as 1983, in fact, he published a paper where he referred to emotions as “cognitive interrupts,” and he said that emotions essentially get our attention and tell us what to pay attention to.

And that’s true, a lot of research holds that up. Emotions do more than that though. So again, it’s because the science has been slow to develop that a lot of these myths exist. But one of the main myths is that if you’re in a negative mood, so you’re feeling negative emotions, anger and fear are both negative emotions, that you’ll tend to have a pessimistic outlook, so negative mood leads to negative outlook.

And there are certainly studies that have shown that negative mood [equals] negative outlook, but based on understanding a lot of the cognitive science literature in the judgment and decision-making field about what predicts risk, we know the underlying dimensions that predict perceptions of risk are whether people perceive a sense of uncertainty, and in addition, whether they have a sense of control. And this relates to classic research done by Paul Slovic as well as by Baruch Fischhoff and colleagues on the drivers of risk.

It just so happens that fear and anger are also associated with cognitive appraisals that relate to perceptions of certainty and perceptions of controllability, but they differ. Fear is almost defined by a sense of uncertainty: when we’re afraid, we don’t know what’s going to happen.

Joe: Mm-hmm.

Jenn: And anger is very different when it comes to a sense of uncertainty. Angry people, whether they know what’s going to happen or not, feel they know. Anger is associated with certainty, and so the two emotions are very different on that dimension. They also differ a bit on the dimension of control: when people are angry, they feel some human is in control; it may not be them, but somebody did something wrong. If there was some natural disaster, for example, you might be sad, but if you think somebody did something wrong, that leads to anger. So anger is associated with perceptions of individual control rather than situational control.

Joe: Yeah.

Jenn: Fear is less so. So with fear, there might be situational or individual control. So in other words, fear and anger differ on these two key cognitive dimensions that drive perceptions of risk. And so once I put together this understanding of fear and anger with what is known from the cognitive science literature about drivers of risk, it was pretty straightforward for me to predict that fear and anger would have different effects on the perceptions of risk and also on risk taking, and that is what we’ve found.

People in a fearful state tend to perceive things as more risky. So even if that fear arose in a prior situation unrelated to what they’re considering now, that fear will lead people to perceive more risk, and it will lead them to be relatively risk averse. Anger, by contrast, because it’s associated with a sense of certainty and individual control … angry people tend to perceive less risk. And, in fact, they can be unrealistically optimistic about their own chances. This is a very important detail: it’s not that angry people think that only good things are going to happen; angry people think that whatever happens, they will prevail.


“Fear will lead people to perceive more risk and it will lead them to be relatively risk averse. [By contrast] … angry people tend to perceive less risk. … It’s not that angry people think that only good things are going to happen; angry people think that whatever happens, they will prevail.” 
— Dr. Jennifer Lerner


We’ve tested this across a number of experiments over the years, both in field studies and in laboratory studies, and it seems to be a very reliable phenomenon. It’s also one that has been replicated cross-culturally. I’m very grateful to the people who have replicated it, since I am somewhat limited in my ability to translate into lots of different languages, but it’s been replicated in most major countries around the world.
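[Editor’s note: a compact, schematic summary of the appraisal-tendency account Jenn describes. The labels are qualitative, drawn from the discussion above, not measured quantities.]

    # Fear and anger are both negative emotions, but they sit at opposite ends
    # of the certainty and control appraisals, which is what drives their
    # opposite effects on risk perception.
    appraisals = {
        "fear": {
            "certainty": "low (we don't know what's going to happen)",
            "control": "situational or individual",
            "risk_perception": "elevated -> relatively risk-averse",
        },
        "anger": {
            "certainty": "high (angry people feel they know)",
            "control": "individual (somebody did something wrong)",
            "risk_perception": "reduced -> 'whatever happens, I will prevail'",
        },
    }

    for emotion, profile in appraisals.items():
        print(emotion, profile)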

Joe: When I was reading some of your research and came across this topic, not only was I fascinated by it, but I drew up a quick little two-by-two matrix with those two axes and just started dropping the different media brands into the different quadrants. What’s the emotional valence that they’re trying to generate in their audience most evenings? Where do they go as far as anger and fear and certainty and uncertainty?

Jenn: Yeah. It’s been shown now that emotional content travels faster on the internet than neutral content. And so absolutely, people are making money on emotion every day, and the more emotionally evocative, the better, in terms of getting social media attention. In fact, companies — which I’m not going to name here — have quantified exactly how much emotion leads to how many more hits. And so this is a real problem in terms of combating misinformation and disinformation, and it leads to a tendency to exaggerate and inflame both fear and anger.

Joe: Yeah, I agree. It’s actually a big part of what I’m hoping will improve as we bring Decision Education into schools across the country: as more people learn about how they’re being manipulated with regard to their biases, or just even their normal cognition, we’ll begin to mitigate it.


“… a big part of what I’m hoping will improve as we bring Decision Education into schools across the country [is that] as more people learn about how they’re being manipulated with regard to their biases or just even their normal cognition, we’ll begin to mitigate it.”  — Dr. Joe Sweeney


Jenn: Absolutely. Let me say one important point on this though: the absence of emotion is not the goal.

Joe: No. Right, right.

Jenn: So emotions are very valuable for judgment and decision-making. And if anyone doubts that, just think to yourself … If you’re not married and you’re contemplating [getting married] one day, or if you are married you might remember, it’s a good idea to consult your feelings before you commit to spend the rest of your life with someone. It’s a good idea to consult your feelings about many things, and there’s an important distinction to be made between integral emotion and incidental emotion. Integral emotions are feelings we have about the judgment or decision at hand. So if you’re trying to make a decision about a risky stock investment, your own ability to handle anxiety, for example, and the uncertainty associated with a high-risk stock, matters for whether you want to invest in that stock or not; some people love that degree of volatility, others don’t.

And certainly in personal decisions, you know, your integral affect is very key. Incidental affect, or emotion, is what you want to be wary of — when your feelings from one event carry over to color [your decision-making process in another circumstance] — and exaggerated levels of emotion are another thing to be wary of. But we never want to say that the goal is to drive out emotions. We have emotions for mainly adaptive reasons.

Joe: When you think about teaching about these skills and dispositions, what we’ve learned about decision-making, to the next generation, what do you imagine will look different in society when we succeed in our mission to ensure Decision Education is part of every student’s learning experience?

Jenn: So this is an enormous goal that I think about almost every day, it’s that important. All of us need to be able to think clearly and systematically about uncertainty — otherwise known as probability — just to be able to live our lives. For example, in order to know: should I take this medication? Should I drive at night in the rain? All of these decisions boil down to estimates of uncertainty, and to be able to make smart, informed decisions about them, not only makes life easier, but also increases your chance of being successful and healthy and happy in life. It baffles me that we spend so much time teaching calculus in high school relative to the amount of time that we’re teaching judgment and decision-making, firm understanding of uncertainty, simple kinds of understanding of statistics, and lots of different heuristics for making decisions.


“All of us need to be able to think clearly and systematically about uncertainty — otherwise known as probability — just to be able to live our lives. For example, in order to know ‘Should I take this medication?’ or ‘Should I drive at night in the rain?’ All of these decisions boil down to estimates of uncertainty, and to be able to make smart, informed decisions about them, not only makes life easier, but also increases your chance of being successful and healthy and happy in life.”  — Dr. Jennifer Lerner


I’ll give you one example, one I had this morning. It’s a little bit of a strange one, but my friend had walked her dog in a state park. The dog, by mistake, was eating stuff on the ground and ate some marijuana, because apparently somebody had brought marijuana to the state park. She had to bring her dog to the vet hospital, and at the vet hospital, the vet said, “We’ve had a 300% increase in dogs coming in with marijuana poisoning.” So that at first might sound horrible, right? Like, oh my God, no one should take their dog to the state park!


“It baffles me that we spend so much time teaching calculus in high school relative to the amount of time that we’re teaching judgment and decision-making, firm understanding of uncertainty, simple kinds of understanding of statistics, and lots of different heuristics for making decisions.” 
— Dr. Jennifer Lerner


Joe: Right.

Jenn: The problem is that we all should learn, while we’re in high school, that we need to consider not just relative risk but also absolute risk. A 300% increase sounds terrible, but what if it only ever happened once in five years among all the people who brought their dogs? If the absolute level is really low to begin with, then the relative increase, 300%, doesn’t mean much. And that’s a silly example, but this actually has real consequences. For example, there was a new birth control development in the UK where the public health service released only the relative increase in risk associated with it. The problem was that people then didn’t use it, and there was an associated increase in pregnancies. This is a case that’s written up in the risk communication literature.
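[Editor’s note: the arithmetic is worth making concrete. A minimal sketch with an invented baseline: a 300% relative increase can still leave the absolute risk tiny.]

    # Invented baseline, purely illustrative.
    visits = 100_000
    cases_before = 2                # absolute risk: 2 poisonings per 100,000 visits
    cases_after = cases_before * 4  # a "300% increase" means four times the baseline

    relative_increase = (cases_after - cases_before) / cases_before
    print(f"relative: {relative_increase:.0%}")  # -> relative: 300%
    print(f"absolute: {cases_after - cases_before} extra cases per {visits:,} visits")
    # -> absolute: 6 extra cases per 100,000 visits -- still rare in absolute terms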

So that’s the kind of thing — that we need to always consider absolute risk when we consider relative risk — that can be taught in high school and that would make so many improvements in everyday life for people: when they go to the doctor, when they consider taking a medicine, when they consider lots of different things. So there are relatively simple, straightforward things like that, and I could name 10 more off the top of my head that we could teach very straightforwardly and that would create better health, more prosperity, and more happiness in life if we did so.

Joe: Well, we just completed the first draft of our learning standards for Decision Education: the outcomes of what a student should know and be able to do at each of the developmental bands, K-12. If you want to send us the things you think we should be including, I would be delighted to receive them.

Jenn, I want to thank you so much for coming on the show. If listeners want to go online and learn more about your work or follow you on social media, where should they start?

Jenn: Okay. So I’m very easy to find. If you type in Jennifer Lerner and Harvard, you’ll get a bunch of hits. Sadly for me, there’s a contrast effect: if you start to type Jennifer L, you get Jennifer Lopez first.

Joe: She’s on the show next week. Yeah. [laughs].

Jenn: … and then you get Jennifer Lawrence, and then you get me, the nerd. So it’s, like, movie star, movie star, and then professor. [laughs] So it’s an unfortunate contrast effect! But I am easy to find, and I have a website, jenniferlerner.com. I also have a Harvard site. I should say that I am so busy with teaching, research, and helping organizations like our Air Force that I don’t have much social media presence. I don’t know how people find time for that, but I have not [laughs], so I’m on LinkedIn and that’s it.

Joe: All right. And for any books, articles, et cetera, I’ve mentioned today, check out the show notes on the Alliance site where you can also find a transcript of today’s conversation. Jenn, thank you so very much. I hope you’ll come back sometime.

Jenn: Happy to. Thanks!
