Episode 017:

Decision Scope, Artificial Intelligence, and Magic Spells

with Cassie Kozyrkov

May 25th, 2022


Episode description

How do the world’s most influential technology companies think about decision-making? Google’s Chief Decision Scientist, Cassie Kozyrkov, joins your host, Dr. Joe Sweeney, Executive Director of the Alliance for Decision Education, to talk about the science of decision-making in data-driven organizations, the pitfalls and possibilities for incorporating AI into the decision-making process, and how computer programming is like a cross between LEGOs and magic spells. Cassie and Joe discuss her journey from logging gemstones in Microsoft Excel as a child to founding the field of decision intelligence at Google, and the profound impact that Decision Education will have on our children’s futures. Cassie also shares critical advice about asking ourselves the right question, in order to make the most rational decision.

As Chief Decision Scientist at Google Cloud, Cassie Kozyrkov advises leadership teams on decision process, AI strategy, and building data-driven organizations. She works to democratize statistical thinking and machine learning so that everyone – Google, its customers, the world! – can harness the beauty and power of data. Cassie is the force behind bringing the practice of decision intelligence to Google and she has personally trained over 15,000 Googlers in machine learning, statistics, and data-driven decision-making.

Before her current role, she served in Google’s Office of the CTO as Chief Data Scientist. Prior to joining Google, Cassie worked as a data scientist and consultant.

Cassie holds degrees in mathematical statistics, economics, psychology, and cognitive neuroscience. When she’s not working, you’re most likely to find Cassie at the theater, in an art museum, exploring the world, or curled up with a good novel.

Joe: I’m excited to welcome our guest today, Cassie Kozyrkov. As Chief Decision Scientist at Google, Cassie advises leadership teams on decision process, AI strategy, and building data-driven organizations. She is the innovator behind bringing the practice of decision intelligence to Google, personally training over 15,000 Googlers. Prior to joining Google, Cassie worked as a data scientist and consultant. She holds degrees in mathematical statistics, economics, psychology, and cognitive neuroscience.

Cassie, welcome. I’m thrilled to have you. I wonder if you could just start by telling us a little bit about yourself: how did you get to be where you are and doing the kind of thinking and work that you’re doing now?

Cassie: Well, I am so delighted to be here. Thank you for having me, Joe. And well, goodness, how far back do you want me to start?

Joe: Honestly, childhood.

Cassie: Well, that is in fact the origin point. [laughs] I was a pretty weird kid. So at about maybe eight or nine years old, I discovered Microsoft Excel [laughs] and unlike other kids, I thought this was the most beautiful thing in the universe. Other kids were playing outside, climbing trees. Here I was, putting my gemstone collection information into an Excel spreadsheet. And at some point the gemstones were for the spreadsheet and not the other way around. I’d get excited to get another kind that would have a different hardness that was not already represented in the spreadsheets. It’s so much fun.

So there’s just something about my brain that loves data. I just think it’s so pretty. But as I matured a little bit I started thinking about what’s actually useful and important, what should I do with my life? And I realized that while data are beautiful, it seems to be that decisions and actions are what’s important because it’s through our actions and our decisions that we impact the world around us. Because, you know, if a data point falls in a forest and no one knows about it, no one does anything about it, there’s no action taken, does it matter? I don’t think so.

And likewise with opinions. If you have an opinion and it doesn’t relate to action in any way, what’s the point of the opinion? So I started looking at everything through the decision lens. So if we’re going to use data, how does it somehow involve itself with decision-making down the line? And when we think about assessing the quality of our knowledge or our opinions, it has to also tie into the actions that we are in danger of taking as a result of holding those opinions.

So then I also realized that in order to study what was important, I had to go down the decision sciences track. And so I was always balancing both. I was always doing data science and decision science and, you know, people ask me, “Oh, when your title changed from the Chief Data Scientist title in the CTO office to Chief Decision Scientist, does that mean that you left data?” I never left data.

I was always in both, except for that tiny little part of my childhood, where I was playing with data for data’s sake. My entire professional career, it was always about both, but the reason that I hold the decision title now is that I felt it was important to really champion decision-making, the decision perspective in data science. So I’m absolutely a data scientist, but I’m quite impatient with any kind of data science that isn’t motivated by decision-making.


“While data are beautiful, it seems to be that decisions and actions are what’s important because [that’s how] we impact the world around us… The reason that I hold the [Chief Decision Scientist] title now is that I felt it was important to really champion the decision-making perspective in data science.”  — Cassie Kozyrkov


Joe: I appreciate that. I know that about you from our earlier conversations and I’d love to unpack it a little bit. So if you go back to that young girl … So you’re collecting these gemstones. When you say you loved data and you loved the spreadsheets, do you remember: what was it you liked? Was it that it was organized, that it was numerical? What was it about it that was exciting?

Cassie: See, I don’t have to remember because I know. [laughs]

Joe: Okay. Good!

Cassie: Because, quietly, when people think I’m doing useful things, sometimes I chill out with a glass of wine and some data entry. You know, that now has nothing to do with my professional role. I obviously don’t professionally enter data, and I don’t often admit this, but sometimes, just to, you know, put some order onto the chaos that is the universe, it’s fun to condense chaotic information and tidy it up, put structure on it. And so there’s just something so relaxing about that.

I mean, I’m sure it’s part of the reason that other people play games like 2048 or Candy Crush or something. To me entering data feels kind of like that. In fact, a lot of the various mathematical activities feel to me like video games. So there are very few video games that I end up feeling are a good use of my time [laughs] because usually I’m like, “Ah, here I am in this video game, this is exactly the same activity as, for example, programming in Python. Why don’t I just go do that other thing that’s actually useful?” [laughs] And they feel the same to me.

You just relax, you want to put some structure on the universe. That’s what it’s about. And so now my spreadsheets tend to be about my own personal biometrics that no one’s going to see. Stuff like how many hours I’ve slept. Now I’m looking at modeling my own deep sleep, you know, little side projects that are fun.

Joe: Yeah. I remember when I first started playing with spreadsheets, I started applying order to all of it. But it was when you started just playing with the simple formulas and messing around with even conditional formatting things that [I] began to get a little taste of what might be possible with programming. And since you mentioned Python, I’m wondering if part of the story is that as you begin to program, and you begin to think about decisions and inputs and outputs, that’s a connecting point between them, or was there a particular book about decision-making, or an experience with a challenging decision? Where does the interest in decision-making begin to show up?

Cassie: Ooh. That is actually not a question that anyone has thought to ask. So I think you’re going to get a unique response here on this podcast. And the truth there is that I had some friends when I was in high school, I used to hang out with older kids. I was in high school, they were in college. And just by random chance one of them ended up pursuing economics. And it felt to me like maybe there’s something a little more important and interesting going on in that content than in any other content I’m being taught.

Yeah, I can be good at mathematics, sure, but what’s it for? Whereas economics, as the study of scarcity, as the study of utility, human needs and doing your best under extreme constraints, that seemed like, “Wow, that’s something real.” So then it got into my head that I should study economics and then have a safe major like statistics, which sounds insane to many people who hate statistics. And I was like, “Yeah, statistics is data. This is the easiest thing on earth. How hard could this be?” [laughs] And then I found out that statistics is actually philosophy. And so there was a small wobble where I got some humility talked into me, [laughs] but once I embraced it, I really continued to love it.

Joe: I had no idea that we had that shared experience. So when I decided to get involved in education and leave the software industry for what I thought would just be a couple of years, I began teaching math. The school where I was teaching said, “Hey, could you help us out? Go get your graduate degree in mathematics and run our math department for a bit?” And I thought, “Sure, why not?” And then when I finished that, I thought, here’s this set of skills that a lot of people think of as hard, that I’m enjoying but it has very little applicability to high school mathematics and certainly none to running a math department.

And at the same time I came across the book, The Worldly Philosophers, if you ever came across that one, or New Ideas from Dead Economists is another one. And it’s the same thing, I began realizing, “Wait a second, there is a field that’s about how we allocate the goods and services and resources to our society, who produces, how it gets produced, and who benefits from it. There’s a whole field here.” And one of the barriers to entry is mathematics or statistics for people. A lot of people who want to be economists, that’s the thing that stops them from being able to pursue it, and I’ve already got that handled. So, why not try to apply to that?

And I ended up doing another degree just in economics for the same reason. I didn’t realize that we had that sort of shared experience of saying, “Wait a second, there’s something over here that’s more concrete and about the world and people’s lived experiences.” So did you immediately start running into decision-making in your economics studies?

Cassie: I wouldn’t say immediately. I transferred to the University of Chicago as an undergraduate, but I started in South Africa. I started before my 16th birthday, so 15 turning 16, as an impressionable young person. I signed up for an economics major. And the way that we do things over there is, if you want to study economics, then you go to the department of commerce at the university and get a Bachelor of Commerce.

Your first-year courses are going to be business management, organizational psychology, accounting, twice. That was horrible. I could really do without that! But macroeconomics, microeconomics, operations research. So you still got some statsy stuff, basic stats, which I found quite unchallenging compared to the real majors-level stats that I was taking alongside it. It was this mix that, if you put it together, had some decision-making orientation, and then of course there were all the macro courses. And for me, there was an interesting signal that I had zero patience for the macroeconomics courses, even though I wanted to deal with important stuff.

Joe: Yes.

Cassie: I think a standup comedian, a standup economist, has this bit where he paraphrases something that Paul Samuelson said. It’s: you can translate macroeconomics in its entirety as blah, blah, blah; as proof of that, I need only remind you that macroeconomists have correctly predicted nine out of the last five recessions.

Joe: That’s right. [laughs]

Cassie: So that’s the joke there. And I really love that sketch, even though it’s a little cruel. I’ve got slightly more respect for macro than that, but my feeling wasn’t very macroeconomics-positive. Whereas microeconomics comes down to decision-making by individuals and by groups, and that was fascinating. But where I really got the bug was with putting psychology together with economics. So whenever there was some intersection between psychology and economics, now we’re talking.

Joe: Yeah. I remember thinking about when we were starting with microeconomics, the assumptions that went sailing by that we weren’t supposed to challenge, like “people are rational actors.”

Cassie: Mm-hmm.

Joe: And I remember just seeing it go by and thinking, “Well, I’m not sure I’m ready to accept that!” And you know, then you come across something like Ariely’s work on Predictably Irrational, or obviously Kahneman and Tversky’s work on Thinking, Fast and Slow, and you’re just like, “Okay, [laughs] we know a whole lot about humans that tells us they don’t actually behave this way. So why do our models suggest that they do? And then how can the decisions that we’re going to execute possibly end up being correct, if we’re basing our models on assumptions we already know aren’t correct?”

Joe: So, okay. You’ve got stats going on, you’ve got the economics going on and I’m assuming you’re doing some programming at this point or not yet?

Cassie: Yes. I still occasionally have this strange mental block about whether or not I do in fact do programming, because it was so completely self-taught that I managed to go several years before anyone identified for me that what I was doing “to just make the computer do stuff,” is called programming.

Joe: Right.

Cassie: If I’m not careful, I still think, “Oh, software engineers, they’ve got some kind of magic that they know about that I don’t know.” Yes, they’ve done it professionally, that’s the difference. I’ve never been a professional software engineer myself, but I’ve been writing code my whole life. It’s a funny thing.

So where I actually got started was Excel spreadsheets, right? I realized that you could record macros, and then not only could you record the macros, but you could go and edit the macros… and she was off! That was the most amazing thing! [laughs] So that was my early teens, I think.

Joe: Yeah, and it does feel like spells, at least for me, anyway. The first time I started writing code — and it was simple stuff — it felt like you were writing magic spells.

Cassie: Exactly.

Joe: When the world just started behaving the way that you wanted it to and often it didn’t work. I loved the feedback loop of: if it didn’t compile — I’m pretty old — if it didn’t compile, that was on you. There was something wrong with your spell, there’s something wrong with your code.

And later, if you didn’t get the results you wanted, like the debugging process, I loved the reliability of it. It was going to do what it was supposed to do. You may have given it garbage or you may have designed it incorrectly, but it was doing what you asked. You just probably didn’t form your question correctly or form your recipe correctly, or your spell, or however you want to think about it. I loved that feedback loop!

Cassie: Yeah. I was going to say, I’m sure that as a teacher you’re quite keenly aware of how much easier it is to program machines than to program humans, right? [laughs] How much easier it is to give a set of instructions, to teach something to a machine, than to humans. With humans, it just takes so much more finesse. So yeah, absolutely. And likewise, I’ve always said that programming is a cross between playing with LEGOs and casting magic spells. Like if someone declares that they wish they could do magic: just learn programming. It is the same thing. It really is!

Joe: Yeah, and [it’s] becoming just more so as we go forward. The more information communications technology that happens in the world, the more ubiquitous computing gets, the more true that statement is that the world is getting written down in algorithms.

Okay. So there’s your journey thus far. And at some point you decide, this is what I’m going to do professionally. And I think that this is “data scientist.” Is that right? Or is it “consultant” first?

Cassie: [laughs] Now we’re delving into the truth here, aren’t we?

So throughout my early years I was working while studying. And the primary thing was, you know, to be able to eat. So I did what I could do. And there were two things that had demand. One was teaching: I ended up tutoring, I ended up teaching some high school classes, substitute teaching. And so I kind of did that throughout college. And then the other one was data analysis. Starting out — this is terrible — analyzing the data in PhD projects, for PhD students who didn’t know how to analyze their own data. And I guess that’s a little bit of a murky area. I claim the innocence of being 17 on the ethics right there, but I started doing that. That was good money.

And then consulting, doing statistical consulting, having a variety of business clients, just kind of helping out. So I was always doing stats stuff, and I’m also super impatient with situations where there’s inefficiency. A friend and I were joking that if I got kidnapped, I would probably be in the back telling the kidnappers to turn left, not right, because it’s faster and better for avoiding the police! I’ve got that personality, [laughs] “I don’t really care, but do it better!”


“I’m also super impatient with situations where there’s inefficiency. A friend and I were joking that if I got kidnapped, I would probably be in the back telling the kidnappers to turn left, not right, because it’s faster and better for avoiding the police!”  — Cassie Kozyrkov


So I found that no matter what I did, there would be some kind of information application, some kind of data thing. And there would be some way that you could improve what you were trying to do. And then when I would propose that, people would get pretty excited and be like, “Oh, you know, this has a name!” and “Come consult for us.” So that’s really how I fell into that stuff. So the statistics side was always the easy thing that I fell back into. And then I also worked as an economic analyst for a while, pursuing the economics thing. And then I’ve done clinical research coordination, where no one really needed my stats skills at the beginning but they turned out to be quite useful.

So wherever I went, there would be some element of data or some element of decision-making where I felt I could contribute. And my passion has just been being useful. I just want to be useful.

So I mean, the dream is a world in which you need none of my skills and I can just go sit on a beach in Thailand and not worry about it because people are teaching wonderfully. People are communicating wonderfully. People are handling all their data wonderfully. People are making all the great decisions. And great, you don’t need me. That’s a beautiful world. I want that world. [laughs] I can just not work and read books all day and that’ll be great.

Unfortunately, there always seem to be ways to be useful and make things better, and so that’s the kind of way that I’ve fallen into it. And realizing that putting the data plus the decision-making together just gives you so much more is how I was always doing both. And it was very awkward before there was a thing like decision intelligence, even before I knew that there was something called data science and decision science, because I’m a statistician who also does, well, not quite statistical inference sometimes. Sometimes I’m automating something: that turns out to be machine learning. Sometimes I’m just exploring the data for insights: that’s analytics.

It wasn’t really clear. You would be part of some community that knows exactly what its thing is, but doesn’t really know about the other things or what they’re called. So it was quite confusing, but I just sort of barreled through it being like, “Right, I’m going to be useful. What’s the useful thing to do right now?” And eventually I realized that if we break down some of those artificial walls between the disciplines, we can really share what we know, and we can say, “Okay, maybe we’ve been studying decision science our whole lives, but decision-making is turning information into better action. So is there someone who knows how we could do the information part better? We should be friends with them and we should be talking to them. We should be learning some of that stuff.” Collaborating, and then once that group is a little successful, they will give themselves a new name.


“Decision-making is turning information into better action.”  — Cassie Kozyrkov


So I think that my career has been this collection of things that happened to be useful, that didn’t really have a name and I guess lucked out, but now there’s some nice terms in the industry for it. But if that never came about, I would still be doing what I was always doing. [laughs]

Joe: Well, your disposition is to constantly be adding comparative advantage, right? That’s just the way you’re built. You’re always looking for how to add value. I want to go back to that definition you just gave, that working one: “decision-making is turning information into better action.” So I’m actually really curious about what you mean by “better” there? What makes one action better than another for you?

Cassie: Ah, that would be for me?

Joe: Yeah.

Cassie: So this is very personal to the decision-maker, and “better” has to be defined by the person whose responsibility it is to make that decision. So, you know, I’m often asked: AI as a decision-maker versus humans as decision-makers, how are they different? What are the roles? What are the differences? To which I say: there is only one decision-maker, ever. It is the human, because that is the person who identifies the need, who says how valuable it is, and then who signs off on the solution that we’re going to use to meet that need. All of those things are deeply subjective.

Your perspective on value, your perspective on what needs doing, that’s going to vary from person to person, from society to society. It’s not like we’re going to build some kind of system and it’s going to have the right answers for everything forever. [laughs] I guess our values as a society are likely to change and then that system will be obsolete. So it is always the human that’s doing that subjective piece, who decides what it means for it to be better.

And then once we’ve got that, then we have to worry about all the other stuff. What’s the best way in which we can assist that identified need? And how do we also check that we thought of everything we should have thought of when we were figuring out what’s worth doing? Do you have that breadth, that scope of thinking, that open-mindedness in considering all the options? People will say, “Oh, should I do A or B?” Well, is this even the decision that’s worth making? Before we ask whether you should do A or B, I want to know that you thought of the whole universe pretty much. Okay, well, whatever amount of it will fit in your head! But the more breadth I see, the more confident I am in the quality of that decision-maker.


“I’m often asked: AI as a decision-maker versus humans as decision-makers, how are they different? To which I say, there is only ever one decision-maker. It is the human, because that is the person who identifies the need, who says how valuable it is, and then who signs off on the solution that we’re going to use to meet that need.”  — Cassie Kozyrkov


Joe: Could you just give an example of that, maybe from your recent work or someone who you’re advising? [What’s] the kind of decision you’re talking about that someone might be trying to frame or figure out the alternatives [for]?

Cassie: Well, that’s a whole world of stuff. Let me say, for example, this morning I was advising a startup and they were interested in creating metrics for their system. Essentially, there is one very obvious metric that they could go for, for one very obvious kind of decision. And to improve this metric is going to be quite expensive. So this is actually going to take away some of their time, effort and budget. And there are all these other things that they could also do.

[They need to ask themselves] “Have we really considered whether optimizing this metric 10% or 15% is the target to set?” And “have we really thought about what our steps are to do that?” That assumes that you’ve got the right decision. What we want to see is that you’ve considered all the other factors as well. Maybe this is not the technology where you want to make your improvement. Maybe you want to fix some other part of your product. Maybe that’s where the more urgent need is. So have we actually thought about the whole space before we ask, “should I take action one or action two on this specific thing?”

Another example, a more interpersonal one: someone asking, “Should we buy a house in a neighborhood that’s close to good schools, or should we continue to rent?” When I hear that, I hear that it’s already super narrow in scope. [It assumes] that schools are the factor, that you should live in this country, that renting and buying are your only options. Maybe you should rent and have investment properties? Maybe you should take an investment-forward approach?

When I hear that kind of “A or B,” I want to know that you’ve thought about other possibilities before you’ve narrowed it down. And that comes to a kind of analytics mindset, taking an analytics approach. In order to find the questions worth asking, you have to inform yourself of your options, and you have to do that as quickly as possible to span as much of the space as possible without committing until it’s a good time to commit, until you’re informed. And so permitting yourself not to get bogged down in the details, not to take any of it too seriously, not to get worried about whether you are using the right kind of mathematics, but to just go and have a quick look at your reality, as fast as possible to see what your unknown unknowns are, as many of them as possible. Then, at that point, you might be qualified to ask a question that’s worth asking.


“Do you have that breadth, that scope of thinking, that open-mindedness in considering all the options? … The more breadth I see, the more confident I am in the quality of that decision-maker … People will say, ‘Oh, should I do A or B?’ Well, is that even the decision that’s worth making?… I want to know that you’ve thought about other possibilities before you’ve narrowed it down.” 
— Cassie Kozyrkov


Joe: Right. Okay. So let’s say you do that quick scan and you think you have a question that’s worth asking and now what’s the next step? What do you encourage people to do at that point?

Cassie: A lot of themes on the decision side are: have you thought broadly enough? Have you considered your possibilities? Have you got a good structure to the “judgment” piece in “judgment and decision-making”? Have you picked your heuristics well?

It’s subtly a “frequentist or Bayesian” question, a matter of statistical philosophy.

So if we’re going to work under uncertainty and do some kind of decision-making, then what I’m very aware of … I’m painfully aware that many statisticians aren’t actually taught during their training that statistics is the science of changing your mind. That’s fundamentally what it’s about.

I’ve flip-flopped between frequentist and Bayesian. I was educated at Duke, which is to Bayesian statistics as the Vatican is to Catholicism. And then I’ve also been involved with universities that are like, “Oh, those Bayesians are some crazy people.” So I’m both. I will just declare that for the record in case someone thinks I’m bashing one side or the other.

The Bayesian statistician has a mathematically described opinion and uses data to update that opinion and then uses this object — this mathematical object that represents their opinion — to convert it to actions or decisions of some kind. So we can say that Bayesians change their minds about their opinions.

Frequentist statistics, the more classical statistics, the stuff you see in STAT 101, the one with p-values, confidence intervals, that stuff, that is about changing your mind about actions directly. We start with the actions, and the connection between the actions and the opinion is straightforward and clean.

Because in Bayesian thinking, you say, at the end of the day, I’ve got all these different opinion objects I could end up with. Now, I have to also declare beforehand for which ones I am taking which blend of actions, which can get quite complicated.

Whereas the frequentist thinking is super simple. So this is where I would usually start with folks who might not have a whole lot of background in this. I’d say, “Right, so we’re going to make a decision under uncertainty.” We might use a frequentist framework. In order to do that, the first thing that you need to do is to ask yourself what you would do in the absence of any additional information. So we need to come up with a default action.

So I’m saying, “No, no, don’t worry about the data for a moment. Let’s just say you have to act right now. You just commit to something: what are you going to do? Are you going to buy the house? Is that the one you’re going to do with no information? Or are we allowing data to convince us to buy the house, but by default we rent? Which one is the default?”

Really, honestly answer that.

And when folks are having a very hard time coming up with what they’re trying to do here, that means that there’s still a lot of work to be done in terms of working through the why: why we’re here, why we want to make this decision, why it’s important. We know we’re not ready. And a huge mistake that I see a lot of data folks make is that they just skip right past that. So they’re like, “Okay, the thing we want to prove is this. Never mind why, never mind what for. Now let’s get the data and prove it.” And it creates chaos, essentially. And what you don’t want to see is these very belligerent discussions about whether this method or that method is correct when no one has even done the first parts.

Joe: I think I understand. So if I have a default preference or my default action is going to be to buy the house, then my alternative is that I’m going to rent. That’s how a frequentist thinks about it. And now I’m going to maybe go look for information to disprove my buying the house, right?

Cassie: So the way that I suggest folks think about this, if they’re embarking on frequentist data science, is that you have to first think about the “no information setting,” then the “full information setting,” and then the “partial information setting.” So the no information setting is: if I have no information whatsoever in addition to this, what am I going to physically do? Not what’s true, not what I know, not what I believe. I don’t believe anything, but what am I going to do?

Joe: Oh, okay.

Cassie: For example: here’s a new medication. We have no idea if it works. By default, what am I going to do with it? Well, if I’m the decision-maker, by default what I’m going to do with it is not eat it, unless the data can convince me that it does in fact work. So by default, if I have no further information, this is what I’m going to do. Now, a different decision-maker might make the decision in a different way; what we do with no information is a human thing.

So you start with a no information setting: this is what I will do if I don’t know anything else.

Okay. Now I go to “full information.” If I were omniscient, let me think about all the information [that], if it were true, would make me happy about what I’ve chosen to do. All the universes in which my default action is a happy choice. If I had access to the information, if I had perfect information, if I knew for a fact that the efficacy of this medication was, you know, it gets rid of my headache after two minutes. Okay. Yes. Then I want to take it. Great. That goes in the alternative hypothesis set. Okay. And then what if it was three minutes? It’s all this “what if” thinking that generates your hypotheses for you.

Joe: Okay.

Cassie: But if you don’t even know what kind of “what ifs” are possible, how are you going to make those hypotheses, right? You are not actually that omniscient. [laughs] It’s very hard to sit and think in a closet by yourself about, you know, “what actions are on the table” and “what would have to be true for you to be happy about one versus the other.” So that’s where the analytics comes in, way before you start.

Anyway, now you’ve done these two things. You’ve gotten your alternative and your null, and now you deal with the partial information. You say, “Great, because I’m not omniscient I can’t just look up the answer.” Sometimes I can; with some decisions it’s fantastic. You’re like, “Oh, all I need to know is what last year’s revenue was. And then if it’s above this [amount], I do it; otherwise I don’t.” So it’s beautiful. It’s fantastic. Then you’re finished.

But usually what you’ll see is that you have an incomplete view on what you care about. And so that means you’re dealing with partial information. You don’t want to be dealing with probability if you don’t have to. But if you recognize that in order to make your decision you’re going to have to deal with something less than what you care about, well, at least we have some tools for you. But here’s the thing with dealing with partial information: because you don’t have all the information, you could be wrong. All that you get out of the statistics framework is that you get to make a decision at a quality that you have selected. That’s it. The math guarantees that quality. But if you’re not even conversant in thinking about setting that quality, then what is the point of all your p-values? So it comes right back to decision-making again. You have to follow that path; it has to be: no information, full information, partial information, and at what quality.
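To make that recipe concrete, here is a minimal sketch in Python of the no-information / full-information / partial-information flow Cassie describes, using her headache-medication example. The numbers, the three-minute threshold, the significance level, and the function itself are illustrative assumptions, not anything from the conversation:

```python
from scipy import stats

# No-information setting: the default action, committed to before seeing any data.
DEFAULT_ACTION = "don't take the medication"
ALTERNATIVE_ACTION = "take the medication"

# Full-information setting: if the true average relief time were under this
# threshold (in minutes), switching away from the default would be the happy choice.
HAPPY_THRESHOLD = 3.0

# Decision quality, selected in advance by the decision-maker.
ALPHA = 0.05


def decide(observed_relief_times):
    """Partial-information setting: switch away from the default action only
    if the data are convincing at the quality level chosen above."""
    # One-sided t-test of H0: mean >= HAPPY_THRESHOLD (keep the default)
    # against H1: mean < HAPPY_THRESHOLD (switch to the alternative).
    t_stat, p_two_sided = stats.ttest_1samp(observed_relief_times, HAPPY_THRESHOLD)
    p_one_sided = p_two_sided / 2 if t_stat < 0 else 1 - p_two_sided / 2
    return ALTERNATIVE_ACTION if p_one_sided < ALPHA else DEFAULT_ACTION


# Illustrative trial data: observed relief times in minutes.
print(decide([2.1, 2.8, 3.4, 2.5, 2.9, 2.2]))
```

Notice that the data only ever earn the right to talk you out of the default; if they are not convincing at the chosen quality, you simply do what you would have done anyway.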

Joe: That was such a great description, Cassie. I really appreciate it. I’m almost reluctant to ask you this, because it feels like I’m just asking you to work, but could you explain then, so how would Bayesian look at that?

Cassie: Right. So with the Bayesian stuff I like to say that you have to commit. This is quite difficult. And for me, one of the tricky things in the kind of Bayesian versus frequentist debate is, well, first, both are subjective because there’ll be assumptions. We’re dealing with partial information in both settings. So it’s not like one of them is more subjective than the other. The difficulty with the Bayesian stuff is that you do have to tie it to action, but the manner in which you tie it to action is much more customizable. Now, that sounds good, but it’s terrifying.

I like to think of frequentist statistics as a light switch. I do or I don’t do. That’s it. And then the decision is: what does it take for me to flip it one way versus the other? Whereas the Bayesian setup is the cockpit of an airplane. There are a lot of buttons. Because you’re saying: I have this mathematical object, this function, this distribution, essentially, and I’m learning the shape of this mathematical function with data. Now I also have to declare before I begin for which versions of this thing I am going to take which kind of action. If you’re very fluent with working with these functions, then it can be quite easy for you to say, “Okay, if the mass of the distribution is at least here or between there, then under that situation I would like to go to the kitchen. And if it’s like that, then I would like to sit down. And if it’s like this, then I would like to do none of the above.” You can have that kind of setting.

However, here’s the thing that’s really difficult. The decision-maker themselves has to be pretty sophisticated, or the statistician has to be really well versed in basically interrogation techniques, to extract from the decision-maker how you should relate this mathematical object to a set of potential actions. And so if you have the privilege of a lot of training in Bayesian statistics, then you can happily make decisions in a Bayesian manner.
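By contrast, here is a minimal sketch of the Bayesian “cockpit” Cassie describes, where the opinion is a distribution learned from data and the mapping from its shape to actions has to be declared up front. The Beta-Binomial model, the thresholds, and the action names are purely illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(seed=0)

# The "opinion object": a posterior over some success rate. Here a Beta(1, 1)
# prior updated with 7 successes out of 10 trials gives a Beta(8, 4) posterior.
posterior_samples = rng.beta(1 + 7, 1 + 3, size=100_000)

# Declared in advance by the decision-maker: which shapes of the opinion
# map to which actions.
prob_rate_above_60_percent = (posterior_samples > 0.6).mean()

if prob_rate_above_60_percent > 0.9:
    action = "roll out widely"
elif prob_rate_above_60_percent > 0.5:
    action = "run a larger pilot"
else:
    action = "do none of the above"

print(f"P(rate > 0.6) is about {prob_rate_above_60_percent:.2f} -> {action}")
```

The arithmetic is not the hard part; the hard part is getting the decision-maker to commit to those mappings from distribution shapes to actions before the data arrive.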

Joe: So when you look at the work that organizations like Google are doing and you see the strengths that they have: you know, they’ve got behavioral scientists, they’ve got cognitive psychologists, they’ve got decision scientists and they’ve got data scientists and they’ve got statisticians and economists. So they’ve got all these tools, but there’s another tool we haven’t talked about yet, which is machine learning, AI, and deep learning. Does that complicate this? Is it just magnifying it? Amplifying the problem? Does it lean to one side or the other? What happens with that?

Cassie: Right. So the way I like to think of the kind of classical machine learning — that’s not your unsupervised machine learning for art stuff. I think yesterday all of Twitter was playing with anime GANs, generative adversarial networks, where you upload your photograph and it makes an anime version of you. That arty stuff is not what I’m talking about here.

I’m talking about software that will do anything from deciding whether the image that you show it is a cat or not a cat, that kind of decision, to how should I move the thermostat, or the joystick on the self-driving car, which action should I take? What’s the correct label for this text input? What letters are represented, so I can look at written-down text and convert it automatically? That kind of stuff, those kinds of systems. Those systems are automating the decision-making.

And so at the end of the day, I would say it’s very dangerous to think of those things as doing anything autonomous. What they’re actually doing is extending or encoding whoever was in charge of building that system, the way those decision-makers believe the decision should be taken. [laughs]  Because at the end of the day, AI really boils down to two lines. There’s a lot of code and difficulty to get the thing to work, sure, but hidden in there, all that code is in service of just two things: which examples should we use, and what does success mean? Right? How do we know if it’s succeeding?

So someone, some human has to say, this is what I mean when I say it works. So a classic example is, I do this little demonstration to kind of connect this with decision-makers. So I say, okay, we’re going to do the most basic thing. We’re going to build a “cat, no cat” machine learning classification system. We’re just going to give it a photograph. And then, you know, you say “cat” or “no cat.” But before we allow the machine learning system to do this, I’m going to have you be my machine learning system. So audience, please shout “cat” or “no cat.”

So I point first to an obvious cat and I get “cat!” loudly, and then I’ve got a dog there, and some kind of rodent, and people are getting it. And then I point at a tiger, at which point half the room goes “cat,” “big cat,” “maybe cat,” “not cat,” and now there’s no consensus. So it is the decision-maker’s job to do that. There is no one right answer. It really depends on why you’re building the system. What is it supposed to do? What is its purpose? What would you as the decision-maker like it to do in this situation? And then how you develop its capability, how you handle the data, and how you ask it to encode that: that part of it, the successes or failures, that is on you.

So you have to be quite clear, as the decision-maker, how you want this handled, how you want it scored, which data you want it to learn from. And then all it’s going to do is extend and accelerate that, like many copies of you. And so what I tell people is that it’s really exciting. It’s so exciting to be able to extend yourself like that, but as we enlarge ourselves, this will make it easier to step on the people around us. So that means that we have to be so careful. We have to train ourselves. We have to take that responsibility of being a decision-maker who is kicking off a technology that has such a massive span of influence. We have to be so careful. And I believe we all have such a great responsibility to make ourselves worthy of that if we want to be leading systems based on this kind of technology.
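To put the “cat, no cat” demo in code terms, here is a minimal sketch of those two human choices: which examples to learn from, and what success means. The labels, the two labeling policies, and the accuracy metric are illustrative assumptions, not anything from Google’s actual tooling:

```python
# Choice 1: which examples, with which labels? Two decision-makers can
# legitimately disagree about the tiger.
labels_big_cats_count = {"house cat": 1, "dog": 0, "hamster": 0, "tiger": 1}
labels_only_pets_count = {"house cat": 1, "dog": 0, "hamster": 0, "tiger": 0}


# Choice 2: what does success mean? Here, plain accuracy against whichever
# labels the decision-maker signed off on.
def accuracy(predictions, labels):
    correct = sum(predictions[image] == labels[image] for image in labels)
    return correct / len(labels)


# A hypothetical trained system's outputs on the same four images.
model_output = {"house cat": 1, "dog": 0, "hamster": 0, "tiger": 1}

print("Scored as 'big cats count':", accuracy(model_output, labels_big_cats_count))    # 1.0
print("Scored as 'only pets count':", accuracy(model_output, labels_only_pets_count))  # 0.75
```

Exactly the same outputs score perfectly under one definition of success and imperfectly under the other; the difference is entirely the decision-maker’s call.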


“Machine learning [will decide]… how should I move the joystick [on] the self-driving car, which action should I take? … Those systems are automating the decision-making. … it’s very dangerous to think of those things as doing anything autonomous. What they’re actually doing is extending or encoding whoever was in charge of building that system, the way those decision-makers believe the decision should be taken. … It’s so exciting [but] … this will make it easier to step on the people around us. So that means that we have to be so careful.”  — Cassie Kozyrkov


Joe: I’m so glad you went there because as you were talking about it and the amplifying of that decision-maker, I was thinking about all the affected users — maybe not even intended users — who are stakeholders in that decision, but don’t have any voice in it. And I’d love to have you come back and talk deeply about that issue and how your whole profession is thinking about it, where you’re trying to lead it? Who should be in the room? How do we get them in there? How do we educate the people who are in the room about it?

Cassie: Yeah. I did write a short blog post on this topic.

Joe: Oh, we’ll link it.

Cassie: So if listeners are interested in this, it’s called The Ultimate Guide to Starting AI. And in there I am talking about who should be part of the process, and I’m highlighting that you need to have user experience expertise, and not only user experience expertise in the room, but give your users a voice. Make sure that you’re not hurting your users with this system when you think about what success looks like. That moment when you’re saying: when the system is successful, here’s how it works. Well, is that actually a good thing for the world? Is it good for everybody? Did you do your homework? And now it comes back, again, to that fundamental skill, that fundamental separator between the weak decision-maker and the strong decision-maker. How big is that span? How carefully did you think about everything that you should have thought about?

Joe: I can’t wait for that conversation. We’ll definitely link the blog, and we’d love to have you back.

We usually close out with three quick questions, and you may have gotten them ahead of time from us, but if not, I don’t think these are going to be a challenge for you at all. So the first one is, Cassie, what would look different in society if we succeed in our mission to ensure that Decision Education is part of every middle and high school student’s learning experience? What’s one thing, or some things, you would see that would be different?

Cassie: So I did get these questions ahead of time. And the funny thing is that I can do your question one and your question two kind of together.

Because your question two is about recommending a book, what book I think people should read. And, you know, I am a huge fan, and I’m biased, of course, in favor of the Alliance for Decision Education. I’m a huge fan of Annie Duke’s book [Thinking in Bets]. And it really struck me: there’s this part she wrote in there, and I think no one has ever said it better. Many writers talk about how the quality of your decisions is the only lever that you have really in how your life turns out, because everything else is luck. So how you make decisions, that’s your control over your reality.

But here’s the beautiful thing that she wrote. She said, if you think about tiny improvements, tiny improvements really compound over time. So think about a ship leaving the harbor, and I don’t know whether she said it starts in New York or London, whatever. We’re going to have a ship go between New York and London. If we have a one-degree navigation error and that ship goes a few yards, that’s not a big deal. But that one degree over the course of the journey from New York to London, that’s going to compound into a big, big mistake. You’re not going to end up at your optimal destination in this metaphor — and in life — if you allow that small error to compound.

And so having Decision Education is a really powerful thing because before those ships have left harbor — while we still have kids young in middle school and high school — we can correct that one degree so that that can compound into a really good life for them. They get to the destination that is optimal for them. And of course, we talked earlier about how this is a very personal thing.

So we can’t tell them what it means for them to have their best life, but we want to arm them with the skills that will allow them to reach their own best lives rather than veer off course, even slightly, slightly, slightly, but in a compounding way. So I think it’s hugely powerful. I’m a big fan.


“The quality of your decisions is the only lever that you have really in how your life turns out, because everything else is luck. So how you make decisions, that’s your control over your reality… Tiny improvements really compound over time, [and] you’re not going to end up at your optimal destination… in life if you allow [a] small error to compound. And so having Decision Education is a really powerful thing because… while we still have kids young in middle school and high school, we can correct that one degree so that that can compound into a really good life for them. They get to the destination that is optimal for them. So we can’t tell them what it means for them to have their best life, but we want to arm them with the skills that will allow them to reach their own best lives.”
— Cassie Kozyrkov


Joe: Well, thank you so much for that. And Cassie, thank you for coming on the podcast. If listeners want to learn more about your work, which I imagine many will, or follow you on social media, where should they start? Where should we send them?

Cassie: Well, I post a lot of things on Twitter. I have a blog. So if you prefer to read, you can go to the blog. I also read some of those blogs out for people who prefer to listen, and I guess there’s a podcasting audience here. So you can find that on SoundCloud or LinkedIn. And then I also have educational videos on YouTube. So whatever your poison is: sound, video, or text, I’ve got some of that for listeners. [laughs]

Joe: All right. So for all those links and any books or articles mentioned today, check out the show notes on the Alliance site, where you can also find a transcript of today’s conversation. Cassie, thank you so much. I look forward to talking with you again soon.

Cassie: It was such a pleasure to be here, Joe. Thanks for having me.
