Photo by author
Special Edition: Strong Opinions Loosely Held, Part 1 (Science and Psychology)
Published on September 9, 2021
Dr. Chris Stout

I come to this piece from a place of concern, not politics. It was spurred by friends who know my weekly newsletter, and by the social media posts of other well-intentioned friends that I found a bit upsetting: their sharing was surely meant to help others in their networks, but it may have more of an iatrogenic impact. I have noted here and in numerous other places that the pandemic has spawned what has become a meme of multitudes becoming Covid-19 experts, à la the Dunning-Kruger Effect. Less funny is the proliferation of hysterical, inaccurate feeds and posts that get propagated on various social media platforms.
My only “horse in this race” is public health. As a note of context, I served on a Board of Health for over a decade and on an advisory Council to the Board of Health for another decade, did a Fellowship in Public Health (University of Illinois’ School of Public Health Leadership Institute), and founded and run a nonprofit center focused on global health, education, and humanitarian intervention; in my undergraduate, graduate, and professional work, I was trained and worked as a scientist and clinician. Thus, this topic is very important to me, as I have shown in past articles like How to Protect Yourself from Fad Science and my three-part series, The Reproducibility Problem: Can Science be Trusted?, Shame on Us?, and What’s a Scientist to Do? If you know me, then you know this is a near-and-dear topic and concern of mine.

I have organized this piece into five areas, purposely in this order: Science, Psychology, Misinformation, Vaccines, and Prosocial Ways to Respond. This is a long-form post that will take time to read. It will take even longer if you choose to look into the linked original sources and listen to the associated podcasts—which I sincerely hope you will do. It is my desire that this not be a “TL;DR” kind of piece. Please be curious. Please read this from a perspective of wanting to learn and understand how others may hold a point of view that’s different from yours. As Covey put it, “Seek first to understand.” I am the poster-boy for seeking to understand what I am puzzled by.

As a departure from my typical LinkedIn Influencer writing, this piece is composed more of curated and vetted pieces that speak to the point with not only greater eloquence but, more importantly, authority, research, and expertise. Also, I will continue to update this article as new information becomes available, as a way to keep it “evergreen,” so that I and others can refer readers to it who may find it of use. Let’s get started.

I. Science

The Half-Life of Facts

In a prior post here, I wrote that just about everything you read will have something to contradict it, even in peer-reviewed, non-fake scientific journals. Often my smart and athletic son has opined to me, “Why can’t there just be a routine to follow that works?” This was in response to his researching sub-three-hour marathon training schedules, workouts, and diets. And he is right. The humorous introduction given to medical school students (“50% of what we teach you over the next five years will be wrong, or inaccurate. Sadly, we don’t know which 50%”) is quite true. Samuel Arbesman, a Harvard mathematician, coined the term “half-life of facts” in reference to the predictable rate at which scientific papers’ findings become obsolete. “What we think we know changes over time. Things once accepted as true are shown to be plain wrong. As most scientific theories of the past have since been disproven, it is arguable that much of today’s orthodoxy will also turn out, in due course, to be flawed.” In medical science, the turnover can be pretty quick—by some estimates, medical knowledge has a 45-year half-life. Mathematics does a better job, as most proofs stay proofs. So do keep in mind the dynamic nature of scientific understanding—better studies, new findings, more precise measurements, black swans, and whatnot all conspire to shake up the former orthodoxy. But there is also bad science and biased science. Fortunately, there are ways to counteract these without having to go to graduate school in biochemistry, thanks to tools like Spotting Bad Science 101: How Not to Trick Yourself and books like Ben Goldacre’s Bad Science.
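To make the idea concrete, here is a minimal sketch in Python of what a 45-year half-life implies. The exponential-decay framing is my own simplification, and the 45-year figure is simply the estimate cited above, not a precise law:

```python
# A minimal sketch (my framing, not Arbesman's exact model): treat the
# obsolescence of findings as exponential decay with a 45-year half-life,
# the estimate for medical knowledge cited above.
def surviving_fraction(years: float, half_life: float = 45.0) -> float:
    """Fraction of findings still considered valid after `years`."""
    return 0.5 ** (years / half_life)

for t in (10, 25, 45, 90):
    print(f"after {t:>2} years: {surviving_fraction(t):.0%} still standing")
# after 10 years: 86% still standing
# after 25 years: 68% still standing
# after 45 years: 50% still standing
# after 90 years: 25% still standing
```

On this rough model, about a third of what a clinician learned 25 years ago would no longer hold today.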
Good Science

“The first principle is that you must not fool yourself—and you are the easiest person to fool. So you have to be very careful about that. After you’ve not fooled yourself, it’s easy not to fool other scientists. You just have to be honest in a conventional way after that. . . . I’m talking about a specific, extra type of integrity that is not lying, but bending over backwards to show how you’re maybe wrong, that you ought to do when acting as a scientist. And this is our responsibility as scientists, certainly to other scientists, and I think to laymen.” —Richard Feynman, addressing Caltech grads about good science

The U.S. Is Getting a Crash Course in Scientific Uncertainty

As the pandemic takes an unexpected direction, Americans again must reckon with twists in scientific understanding of the virus.

When the coronavirus surfaced last year, no one was prepared for it to invade every aspect of daily life for so long, so insidiously. The pandemic has forced Americans to wrestle with life-or-death choices every day of the past 18 months — and there’s no end in sight. Scientific understanding of the virus changes by the hour, it seems. The virus spreads only by close contact or on contaminated surfaces, then turns out to be airborne. The virus mutates slowly, but then emerges in a series of dangerous new forms. Americans don’t need to wear masks. Wait, they do. At no point in this ordeal has the ground beneath our feet seemed so uncertain. In just the past week, federal health officials said they would begin offering booster shots to all Americans in the coming months. Days earlier, those officials had assured the public that the vaccines were holding strong against the Delta variant of the virus, and that boosters would not be necessary. The Food and Drug Administration has formally approved the Pfizer-BioNTech vaccine, which has already been given to scores of millions of Americans. Some holdouts found it suspicious that the vaccine was not formally approved yet somehow widely dispensed. For them, “emergency authorization” never seemed quite enough.
Americans are living with science as it unfolds in real time. The process has always been fluid, unpredictable. But rarely has it moved at this speed, leaving citizens to confront research findings as soon as they land at the front door, a stream of deliveries that no one ordered and no one wants. Is a visit to my ailing parent too dangerous? Do the benefits of in-person schooling outweigh the possibility of physical harm to my child? Will our family gathering turn into a superspreader event?
Living with a capricious enemy has been unsettling even for researchers, public health officials and journalists who are used to the mutable nature of science. They, too, have frequently agonized over the best way to keep themselves and their loved ones safe. But to frustrated Americans unfamiliar with the circuitous and often contentious path to scientific discovery, public health officials have seemed at times to be moving the goal posts and flip-flopping, or misleading, even lying to, the country.

Most of the time, scientists are “edging forward in a very incremental way,” said Richard Sever, assistant director of Cold Spring Harbor Laboratory Press and a co-founder of two popular websites, bioRxiv and medRxiv, where scientists post new research. “There are blind alleys that people go down, and a lot of the time you kind of don’t know what you don’t know.”

Biology and medicine are particularly demanding fields. Ideas are evaluated for years, sometimes decades, before they are accepted. Researchers first frame the hypothesis, then design experiments to test it. Data from hundreds of studies, often by competing teams, are analyzed before the community of experts comes to a conclusion. In the interim, scientists present the findings to their peers, often at niche conferences that are off-limits to journalists and the general public, and hone their ideas based on the feedback they receive. It’s not unusual to see attendees at these meetings point out — sometimes harshly — every flaw in a study’s methods or conclusions, sending the author back to the lab for more experiments. Fifteen years elapsed from the description of the first cases of H.I.V. to the identification of two proteins the virus needs to infect cells, a finding crucial to research for a cure. Even after a study has reached a satisfying conclusion, it must be submitted for rigorous review at a scientific journal, which can add another year or more before the results become public. Measured on that scale, scientists have familiarized themselves with the coronavirus at lightning speed, partly by accelerating changes to this process that were already underway.
Treatment results, epidemiological models, virological discoveries — research into all aspects of the pandemic turns up online almost as quickly as authors can finish their manuscripts. “Preprint” studies are dissected online, particularly on Twitter, or in emails between experts. What researchers have not done is explain, in ways that the average person can understand, that this is how science has always worked. The disagreements and debates, played out in public instead of at obscure conferences, give the false impression that science is arbitrary or that scientists are making things up as they go along. “What a non-scientist or the layperson doesn’t realize is that there is a huge bolus of information and consensus that the two people who are arguing will agree upon,” Dr. Sever said. Is it really so surprising, then, that Americans feel bewildered and bamboozled, even enraged, by rapidly changing rules that have profound implications for their lives?
Federal agencies have an unenviable task: Creating guidelines needed to live with an unfamiliar and rapidly spreading virus. But health officials have not acknowledged clearly or often enough that their recommendations may — and very probably would — change as the virus, and their knowledge of it, evolved. “Since the beginning of this pandemic, it’s been a piss-poor job, to say it in the nicest way,” said Dr. Syra Madad, an infectious disease epidemiologist at the Belfer Center for Science and International Affairs at Harvard.
Leaders in the United States and Britain have promised too much too soon, and have had to backtrack. Health officials have failed to frame changing advice as necessary when scientists learn more about the virus. And the officials have not really defined the pandemic’s end — for example, that the virus will finally loosen its stranglehold once infections drop below a certain mark. Without a clearly delineated goal, it can seem as if officials are asking people to give up their freedoms indefinitely.

One jarring backtrack was the mask guidance by the Centers for Disease Control and Prevention. The agency said in May that vaccinated people could drop their masks, advice that helped set the stage for a national reopening. Officials did not emphasize, or at least not enough, that the masks could be needed again. Now, with a new surge in infections, they are. “It can be really difficult for public perception and public understanding when these big organizations seem to reverse course in a way that is really not clear,” said Ellie Murray, a science communicator and public health expert at Boston University. It does not help that the C.D.C. and the World Health Organization, the two leading public health agencies, have disagreed so frequently in the past 18 months — on the definition of a pandemic, on the frequency of asymptomatic infections, on the safety of Covid-19 vaccines for pregnant women.

Most Americans have a decent grasp of basic health concepts — exercise is good, junk food is bad. But many are never taught how science progresses. In 2018, 15-year-olds in the United States ranked 18th in their ability to explain scientific concepts, lagging behind their peers in not just China, Singapore and the United Kingdom, but also Poland and Slovenia. In a 2019 survey by the Pew Research Center, many Americans correctly identified fossil fuels and the rising threat of antibiotic resistance, but they were less knowledgeable about the scientific process. And basic tenets of public health often are even more of a mystery: How does my behavior affect others’ health? Why should I be vaccinated if I consider myself low-risk? “People weren’t primed before to understand a lot of these concepts,” Dr. Madad said. “We should have known that we couldn’t expect the public to change their behaviors on a dime.”

Both information and disinformation about Covid-19 surface online, especially on social media, much more now than in previous public health crises. This represents a powerful opportunity to fill in the knowledge gaps for many Americans.
But health officials have not taken full advantage. The C.D.C.’s Twitter feed is a robotic stream of announcements. Agency experts need not just to deliver messages, but also to answer questions about how the evolving facts apply to American lives. And health officials need to be more nimble, so that bad actors don’t define the narrative while real advice is delayed by a traditionally cumbersome bureaucracy. “They’re not moving at the speed that this pandemic is moving,” Dr. Murray said. “That obviously creates a perception in the public that you can’t just rely on those more official sources of news.”

In the middle of a pandemic, health officials have some responsibility to counter the many spurious voices on Twitter and Facebook spreading everything from pseudoscience to lies. Risk communication during a public health crisis is a particular skill, and right now Americans need the balm. “There are some people whose confidence outweighs their knowledge, and they’re happy to say things which are wrong,” said Helen Jenkins, an infectious disease expert at Boston University. “And then there are other people who probably have all the knowledge but keep quiet because they’re scared of saying things, which is a shame as well, or just aren’t good communicators.”

Health officials could begin even now with two-minute videos to explain basic concepts; information hotlines and public forums at the local, state and federal levels; and a responsive social media presence to counter disinformation. The road ahead will be difficult. The virus has more surprises in store, and the myths that have already become entrenched will be hard to erase. But it’s not too much to hope that the lessons learned in this pandemic will help experts explain future disease outbreaks, as well as other urgent problems, like climate change, in which individual actions contribute to the whole. The first step toward educating the public and winning their trust is to make plans, and then communicate them honestly — flaws, uncertainty and all.

By Apoorva Mandavilli, https://www.nytimes.com/2021/08/22/health/coronavirus-covid-usa.html?utm_source=pocket&utm_medium=email&utm_campaign=pockethits

Making Sense of Covid Data Isn't Easy, Even with Charts

New information is varied, can be contradictory and even misleading.

It seems like not a day goes by without a new Covid-19 study to spark a bit of anxiety: Moderna Makes Twice as Many Antibodies as Pfizer, Study Says; Previous Covid Prevents Delta Infection Better Than Pfizer Shot; South African Scientists Say New Variant May Have ‘Increased
Transmissibility.’ Despite the flood of information, humanity is still struggling to answer countless Covid-related questions: Are breakthrough cases now the norm? Do I need a booster shot? Are my antibodies waning? Whether you Google these questions, or phone your mother to see what she thinks, the answers you receive are bound to be at best varied and at worst contradictory or even wrong. “A year and a half into the pandemic, Americans are more confused than ever about the risks they face, and that goes for experts and lay people alike,” writes Faye Flam in her column.

So let’s get back to basics and begin with a simple fact: Vaccine protection is fading. But like a good pair of jeans that seem to fit better over time, this is a natural development. Vaccines still work! And your jeans still look great, even with that mustard stain! Those who got the jab earlier this year still have incredibly good protection against this virus, and even better protection against severe disease and death. You still might be wondering, how is my vaccine doing? Some of us got one shot, others two depending on the vaccine, and they’re all faring slightly differently in a delta-riddled world. In the U.K., the ZOE COVID Study looked at whether any of its app contributors reported a positive test result from late May — when delta became the U.K.’s dominant strain — through the end of July. The study revealed that initial protection against the virus a month after the second dose of the Pfizer/BioNTech shot was 88%. After five to six months, protection fell to 74%. For the AstraZeneca jab, there was about 77% protection a month after the second dose, which fell to 67% after four to five months. Take comfort in these numbers, because it’s not as though the clock has struck midnight and protection goes poof like Cinderella’s pumpkin carriage. [Chart in original: vaccine protection over time.]
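For a rough sense of scale, here is a minimal back-of-the-envelope sketch in Python. The linear decline and the month spans are my own simplifying assumptions, not the ZOE study's model; only the endpoint percentages come from the figures quoted above:

```python
# Rough per-month drop in protection, assuming (my assumption, not the
# study's) a linear decline between the two ZOE endpoints quoted above.
def monthly_decline(p_start: float, p_end: float, months: float) -> float:
    """Average percentage-point drop in protection per month."""
    return (p_start - p_end) / months

# Pfizer/BioNTech: 88% at 1 month -> 74% at ~5.5 months
print(f"Pfizer/BioNTech: ~{monthly_decline(88, 74, 4.5):.1f} points/month")
# AstraZeneca: 77% at 1 month -> 67% at ~4.5 months
print(f"AstraZeneca:     ~{monthly_decline(77, 67, 3.5):.1f} points/month")
```

Roughly three percentage points per month on those assumptions: a slow fade, not a cliff, which is the columnists' point.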
As we continue to track and examine the efficacy of vaccines, variants remain a large concern. One tweak to the virus’s genetic code can wreak havoc on entire countries, causing illness and hospitalizations to soar. It seems like there’s always a new Greek letter we’re learning about. Alpha, delta and now mu, which originated in Colombia. Although this latest variant of interest accounts for less than 0.1% of global Covid cases, it’s on the rise in South America. In Colombia, 39% of Covid infections have been linked to the mu variant. [Chart in original: mu's share of cases.]
The question is now whether we should supply already-vaccinated people in places like the U.S. with boosters or donate vaccines to prevent new variants like mu from breaking out in less economically developed parts of the world. Although wealthy nations have seen their vaccinated numbers grow immensely in the past year, globally there’s still a long way to go. [Chart in original: vaccination rates by region.]
On the booster front, there wasn't much consensus, in part because President Joe Biden didn't wait to get the blessing of health authorities before setting the target date for a late September booster rollout. That plan is now in doubt, explains health care columnist Max Nisen: "The desire for speed is understandable; people are scared and want to be as safe as possible. But setting a date before experts weighed in was a mistake.”
By Jessica Karl and Lara Williams, https://www.bloomberg.com/opinion/articles/2021-09-05/making-sense-of-mu-variant-covid-vaccine-protection-and-boosters-kt75y245?cmpid=BBD090621_CORONAVIRUS&utm_medium=email&utm_source=newsletter&utm_term=210906&utm_campaign=coronavirus

There have been 154 retracted COVID studies as of September 2021. The damage may already be done

If a COVID study is retracted from a medical journal, does it make a sound—or at the very least seep into the public consciousness the way the now-pulled research originally did? It’s a rhetorical question (public awareness of study findings tends to stop at a retraction’s edge, unfortunately). But I ask it because as of September 1, Retraction Watch has tagged 154 retracted COVID-19 papers. And if longstanding evidence is any indication, very few of them will receive the same level of traditional and social media play after they’ve been discredited that they did prior to the academic rebuke.

That’s especially relevant to COVID containment efforts, as the public health campaign against the coronavirus has been plagued by misinformation, whether on testing, treatments, vaccines, or even the origins of the virus itself. But even people who aren’t part of the tin foil hat crowd may buy into a study that carries an illusory sheen of prestige in the pages of a medical journal. And if you glance at the spectrum of studies tagged by Retraction Watch, which range from those promoting the use of the unproven horse and livestock parasite-fighter ivermectin for coronavirus to those supporting some of the more bizarre conspiracies about 5G networks giving people COVID, that’s a serious problem during a pandemic which has yet to crest.

There are all sorts of reasons these various studies were retracted. Some didn’t receive proper informed consent from patients who were unknowingly used for such research; others just didn’t have verifiable or robust datasets to support sweeping claims such as hydroxychloroquine’s and ivermectin’s effectiveness against COVID. Science is a process, and mistakes happen along the way. This is why academic peer review and watchdog policies such as medical journal retractions exist in the first place. The trouble lies in what happens (or doesn’t happen) next. “Our findings reveal that retracted articles may receive high attention from media and social media and that for popular articles, pre-retraction attention far outweighs post-retraction attention,” write Stanford School of Medicine researchers in a paper published this past May.
There’s some nuance here. For instance, a retracted article may actually get some public attention if the big news event driving it is the retraction itself. Nearly 60% of retracted articles received most of their attention after retraction, according to the Stanford researchers. But things are a bit different (and harmful) when it comes to the most problematic studies. “However, this is not the case for the popular articles, which by the nature of being popular may also be the ones most likely to spread misinformation,” according to the authors. “These articles tend to receive 2.5 times the amount of attention received by their retraction after adjusting for attention received because of retraction.” As it turns out, some scientific bells simply can’t be unrung.

By Sy Mukherjee, https://fortune.com/2021/09/02/retract-covid-papers-the-capsule/

"Simpson" strikes again

At first glance, the Israeli data seems straightforward: People who had been vaccinated in the winter were more likely to contract the virus this summer than people who had been vaccinated in the spring. Yet it would truly be proof of waning immunity only if the two groups — the winter and spring vaccine recipients — were otherwise similar to each other. If not, the other differences between them might be the real reason for the gap in the Covid rates. As it turns out, the two groups were different. The first Israelis to have received the vaccine tended to be more affluent and educated. By coincidence, these same groups later were among the first exposed to the Delta variant, perhaps because they were more likely to travel. Their higher infection rate may have stemmed from the new risks they were taking, not any change in their vaccine protection. Statisticians have a name for this possibility — when topline statistics point to a false conclusion that disappears when you examine subgroups. It’s called Simpson’s Paradox.
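Here is a minimal sketch in Python of how that can happen. The numbers below are invented purely for illustration; they are not the Israeli figures:

```python
# Invented illustration of Simpson's Paradox (not the Israeli data).
# Within each exposure subgroup, the winter and spring cohorts have the
# SAME infection rate; the overall rates differ only because the winter
# cohort contains far more high-exposure people (e.g., frequent travelers).
cohorts = {
    #           subgroup: (infected, total)
    "winter": {"high_exposure": (45, 500), "low_exposure": (5, 500)},
    "spring": {"high_exposure": (9, 100), "low_exposure": (9, 900)},
}

for name, groups in cohorts.items():
    infected = sum(i for i, _ in groups.values())
    total = sum(n for _, n in groups.values())
    rates = {g: f"{i / n:.1%}" for g, (i, n) in groups.items()}
    print(f"{name}: overall {infected / total:.1%}, by subgroup {rates}")

# winter: overall 5.0%, by subgroup {'high_exposure': '9.0%', 'low_exposure': '1.0%'}
# spring: overall 1.8%, by subgroup {'high_exposure': '9.0%', 'low_exposure': '1.0%'}
```

The topline gap (5.0% vs. 1.8%) looks like waning protection, yet it vanishes entirely once you compare like with like.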
This paradox may also explain some of the U.S. data that the C.D.C. has cited to justify booster shots. Many Americans began to resume more indoor activities this spring. That more were getting Covid may reflect their newfound Covid exposure (as well as the arrival of Delta), rather than any waning of immunity over time.

‘Where is it?’

Sure enough, other data supports the notion that vaccine immunity is not waning much. The ratio of positive Covid tests among older adults and children, for example, does not seem to be changing, Dowdy notes. If waning immunity were a major problem, we should expect to see a faster rise in Covid cases among older people (who were among the first to receive shots). And even the Israeli analysis showed that the vaccines continued to prevent serious Covid illness at essentially the same rate as before. “If there’s data proving the need for boosters, where is it?” Zeynep Tufekci, the sociologist and Times columnist, has written.

Part of the problem is that the waning-immunity story line is irresistible to many people. The vaccine makers — Pfizer, Moderna and others — have an incentive to promote it, because booster shots will bring them big profits. The C.D.C. and F.D.A., for their part, have a history of extreme caution, even when it harms public health. We in the media tend to suffer from bad-news bias. And many Americans are so understandably frightened by Covid that they pay more attention to alarming signs than reassuring ones.

The bottom line

Here’s my best attempt to give you an objective summary of the evidence, free from alarmism — and acknowledging uncertainty: Immunity does probably wane modestly within the first year of receiving a shot. For this reason, booster shots make sense for vulnerable people, many experts believe. As Dr. Céline Gounder of Bellevue Hospital Center told my colleague Apoorva Mandavilli, the C.D.C.’s data “support giving additional doses of vaccine to highly immunocompromised persons and nursing home residents, not to the general public.” The current booster shots may do little good for most people. The vaccines continue to provide excellent protection against illness (as opposed to merely a positive Covid test). People will eventually need boosters, but it may make more sense to wait for one specifically designed to combat a variant. “We don’t know whether a non-Delta booster would improve protection against Delta,” Dr. Aaron Richterman of the University of Pennsylvania told me. A national policy of frequent booster shots has significant costs, financially and otherwise. Among other things, the exaggerated discussion of waning immunity contributes to vaccine skepticism.
While Americans are focusing on booster shots, other policies may do much more to beat back Covid, including more vaccine mandates in the U.S.; a more rapid push to vaccinate the world (and prevent other variants from taking root); and an accelerated F.D.A. study of vaccines for children. As always, we should be open to changing our minds as we get new evidence. As Richterman puts it, “We have time to gather the appropriate evidence before rushing into boosters.”

Advice from Peter Attia, MD, on a Better Understanding of Science

Studying Studies: Part I – relative risk vs. absolute risk

As we set off on our inaugural Nerd Safari, we think a primer on interpreting research—“studying studies,” so to speak—might be helpful. This Nerd Safari will be the first in a series that explores just that: how does one actually read and make sense of the barrage of “studies” cited? Relative and absolute risk, observational studies and clinical trials, power analysis and statistical significance, and the myriad biases and threats to internal validity are some of the larger themes throughout the series. There’s an almost endless number of topics within topics (e.g., how do meta-analyses work and why we should be skeptical of them) that we may skim over today, but dig into tomorrow: in these cases we’ll cover a number of them in more detail in the future and update the links in this original series. Similar to going on a safari in the same place every few years, observing animals in their natural habitat, and enjoying different experiences each time: learning something new, seeing something new—when you come back to a Nerd Safari, you may pick up a new piece of knowledge or visit a new link on our site that wasn’t there before, and expand your knowledge. https://peterattiamd.com/ns001/

Here are other articles from this series (followed by a short illustrative sketch of the Part I topic):
• Studying Studies: Part II – observational epidemiology
• Studying Studies: Part III – the motivation for observational studies
• Studying Studies: Part IV – randomization and confounding
• Studying Studies: Part V – power and significance
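As a concrete taste of the Part I topic, here is a minimal sketch of relative vs. absolute risk in Python. The trial numbers are invented for illustration, not taken from any study cited here:

```python
# A minimal sketch of relative vs. absolute risk, with invented trial
# numbers (purely illustrative; not from any study cited in this piece).
treated_events, treated_n = 1, 1000   # 0.1% risk with treatment
control_events, control_n = 2, 1000   # 0.2% risk without

risk_t = treated_events / treated_n
risk_c = control_events / control_n

rrr = 1 - risk_t / risk_c             # relative risk reduction
arr = risk_c - risk_t                 # absolute risk reduction
nnt = 1 / arr                         # number needed to treat

print(f"relative risk reduction: {rrr:.0%}")   # 50% -- sounds dramatic
print(f"absolute risk reduction: {arr:.2%}")   # 0.10% -- sounds modest
print(f"number needed to treat:  {nnt:.0f}")   # 1000
```

A "50% risk reduction" and a "0.1 percentage-point risk reduction" can describe exactly the same result; headlines usually quote the former.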
John Ioannidis, MD, DSc: Why most biomedical research is flawed, and how to improve it

“We need to defend our method. We need to defend our principles. We need to defend the honesty of science in trying to communicate it rather than building exaggerated promises or narratives that are not realistic.” —John Ioannidis
https://peterattiamd.com/johnioannidis/

Why we’re not wired to think scientifically (and what can be done about it)

What is it about being human that conflicts with being scientific? https://peterattiamd.com/wired-think-scientifically-can-done/
Randomized controlled trials: when the gold standard leaves you with fool’s gold

If I told you that I read a randomized double-blinded placebo-controlled trial conducted over 5 years and carried out in over 18,000 participants, is there any scenario under which you would not believe it to be an excellent trial? https://peterattiamd.com/randomized-controlled-trials-when-the-gold-standard-leaves-you-with-fools-gold/

Scientists rise up against statistical significance

Valentin Amrhein, Sander Greenland, Blake McShane and more than 800 signatories call for an end to hyped claims and the dismissal of possibly crucial effects. https://www.nature.com/articles/d41586-019-00857-9?fbclid=IwAR3ZwWsntsWAEZQ4n4U80RaAtFIkHOe_XQbNe4zLeCxu4_ty_6xg6bkZ8E

II. Psychology

There Are Four Modes of Thinking: Preacher, Prosecutor, Politician, and Scientist. You Should Use One Much More

You wouldn't use a hammer to try to cut down a tree. Try to use an axe to drive nails and you're likely to lose a finger. Different physical jobs call for different tools. So, too, do different mental jobs. Optimism and big-picture thinking will help you sell your business idea. Keeping your books in order requires a more detail-oriented approach. Motivating employees requires more empathy than analytical thinking. Different modes of thinking are best suited for different situations, and according to a new interview with star Wharton professor and best-selling author Adam Grant, most of us don't utilize one particularly powerful mindset nearly enough.

The 4 thinking modes

Grant has been doing the rounds to promote his latest book, Think Again. He spoke to Inc.com's Lindsay Blakely about its lessons for business owners. He also found himself recently speaking with the Greater Good Science Center's Jill Suttie. Both interviews are well worth a read for Grant fans, but one particular point from Suttie's sticks out as useful for those interested in quick, actionable tips to boost their effectiveness. In the course of the interview, Grant outlines four distinct thinking styles we use to approach problems (the first three of which were outlined by Grant's Wharton colleague Philip Tetlock):

1. Preacher: "When we're in preacher mode, we're convinced we're right," explained Grant. From the salesman to the clergyman, this is the style you use when you're trying to persuade others to your way of thinking.
2. Prosecutor: "When we're in prosecutor mode, we're trying to prove someone else wrong," he continued.

3. Politician: It's no shock that "when we're in politician mode, we're trying to win the approval of our audience."

4. Scientist: When you think like a scientist, "you favor humility over pride and curiosity over conviction," Grant explained. "You look for reasons why you might be wrong, not just reasons why you must be right."

"I think too many of us spend too much time thinking like preachers, prosecutors, and politicians," Grant said. Obviously the other modes of thinking can be useful: if you're in a pulpit, preach away. But, Grant argues, these mindsets predispose us against changing our minds, even in the face of compelling new evidence. "In preacher and prosecutor mode, I'm right and you're wrong, and I don't need to change my mind. In politician mode, I might tell you what you want to hear, but I'm probably not changing what I really think; I'm posturing as opposed to rethinking," he explained. Think like a scientist, on the other hand, and you view your opinions more as hypotheses in need of confirmation or rebuttal. With that mindset, changing your mind not only isn't weak or embarrassing, it's a sign you're progressing. Ideally, that makes you not only willing to hear new points of view, but eager to seek out evidence that contradicts your opinions. It's a mindset that can be particularly valuable for entrepreneurs. One Italian study Grant mentions taught budding business owners to view their plans as hypotheses for testing. Compared to a control group, "those entrepreneurs that we taught to think like scientists brought in more than 40 times the revenue of the control group," he said (40 times!).

Tips to think more like a scientist

Some of the world's most successful leaders already understand the impressive benefits of this mode of thinking. Jeff Bezos, for instance, looks to hire people who change their minds often, seeing it as a sign of just this sort of intellectually humble, scientist-style thinking. Two hundred years ago, Ben Franklin confessed his own prosecutorial tendencies in his autobiography, and advised both himself and others to spend less time arguing and more searching out smart new ways of looking at the world. That, of course, comes easier to some of us than others. If changing your mind doesn't come naturally to you, there are steps you can take to nudge yourself into scientist mode more often. Grant outlines several in both the Greater Good interview and his conversation with Blakely, including thinking through what new information would change your mind about a topic and surrounding yourself with people willing to challenge your thinking.
Other experts have also offered tips on cultivating intellectual humility. One clever idea is to remind yourself each morning to figure out something you're wrong about that day. Another, popular with VCs and Silicon Valley insiders, is to repeat the mantra "strong opinions, weakly held" to nudge yourself to take definite stands but be unafraid of modifying your beliefs when new information comes to light.
Some of these ideas may work better for you than others, but one thing is true for nearly all of us: Putting pride aside to think like a scientist is difficult. It's also extremely valuable.

By Jessica Stillman, https://www.inc.com/jessica-stillman/there-are-4-modes-of-thinking-preacher-prosecutor-politician-scientist-you-should-use-1-much-more.html

Dunning–Kruger effect

The Dunning–Kruger effect is a hypothetical cognitive bias stating that people with low ability at a task overestimate their ability. As described by social psychologists David Dunning and Justin Kruger, the bias results from an internal illusion in people of low ability and from an external misperception in people of high ability; that is, "the miscalibration of the incompetent stems from an error about the self, whereas the miscalibration of the highly competent stems from an error about others". It is related to the cognitive bias of illusory superiority and comes from people's inability to recognize their lack of ability. Without the self-awareness of metacognition, people cannot objectively evaluate their level of competence. https://en.wikipedia.org/wiki/Dunning%E2%80%93Kruger_effect and https://en.wikipedia.org/wiki/Illusory_superiority

Don't confuse confidence for competence - Adam Grant

Why People Feel Like Victims

A fascinating study on how we perceive ourselves as victims, build identities on top of that perception, and the unlikely consequences.

In a polarized nation, victimhood is a badge of honor. It gives people strength. “The victim has become among the most important identity positions in American politics,” wrote Robert B. Horwitz, a communications professor at the University of California, San Diego. Horwitz published his study, “Politics as Victimhood, Victimhood as Politics,” in 2018. He focused on social currents that drove victimhood to the fore of American political life, arguing it “emerged from the contentious politics of the 1960s, specifically the civil rights movement and its aftermath.” What lodges victimhood in human psychology?
In 2020, researchers in Israel, led by Rahav Gabray, a doctor of psychology at Tel Aviv University, conducted a series of empirical studies to come up with an answer. They identify a negative personality trait they call TIV, or Tendency toward Interpersonal Victimhood. People who score high on a TIV test have an “enduring feeling that the self is a victim in different kinds of interpersonal relationships,” they write.
The study of TIV is built around four pillars. The first pillar is a relentless need for one’s victimhood to be clearly and unequivocally acknowledged by both the offender and the society at large. The second is “moral elitism,” the conviction that the victim has the moral high ground, an “immaculate morality,” while “the other” is inherently immoral. The third pillar is a lack of empathy, especially an inability to see life from another perspective, with the result that the victim feels entitled to act selfishly in response. The fourth pillar is rumination—a tendency to dwell on the details of an assault on self-esteem. You need to spend only a few minutes watching or reading the news, in any country, to hear and see victimhood raging. We caught up with Gabray to get the science behind the headlines.

Is TIV an aberration in the personality?

Sometimes it may be, if one is high on the TIV scale. But we didn’t research clinical patients. That’s not what interested me. I’m interested in how this tendency appears in normal people, not those with a personality disorder. What we found was that, as in a bell curve, most people who experience TIV appear in the middle range.

You found a correlation between TIV and what you referred to as an “anxious attachment style,” as opposed to “secure and avoidant” styles. What is the anxious style?

Another way to say it is an “ambivalent attachment style.” So when a child is very young, and care is uncertain, perhaps the caregiver, or the male figures in the child’s life, don’t act consistently, sometimes they may act very aggressively without warning, or they don’t notice that the child needs care. That’s when the anxious attachment style or ambivalent attachment style is created.

So victimhood is a learned behavior after a certain age.

Yes, normally children internalize the empathetic and soothing reactions of their parents; they learn not to need others from outside to soothe themselves. But people with high TIV cannot soothe themselves. This is partly why they experience perceived offenses for long-term periods. They tend to ruminate about the offense. They keep mentioning they are hurt, remembering and reflecting on what happened, and also they keep dwelling on the negative feelings associated with the offense: hopelessness, insult, anger, frustration. People with high TIV have a higher motivation for revenge, and have no wish to avoid their offenders.
Why is it so difficult for people with a high degree of TIV to recognize that they can hurt other people?

They don’t want to divide up the land of victimhood with other people. They see themselves as the ultimate victim. And when other people say, “OK, I know that I hurt you, but you also hurt me,” and want them to take responsibility for what they did, the person with TIV is unable to do it, because it’s very hard to see themselves as an aggressor.

In one of your studies, you conclude that TIV is related to an unwillingness to forgive, even to an increased desire for revenge. How did you come to that?

In an experiment, participants were asked to imagine they were lawyers who had received negative feedback from a senior partner in their firm. What we found was that the higher the tendency of participants to perceive interpersonal victimhood, the more they tended to attribute the criticism by the senior partner to negative qualities of the senior partner himself, which led to a greater desire for revenge. Our study finds that not only do people with high TIV have a higher motivation for revenge, but they have no wish to avoid their offenders.

How does the fourth pillar of TIV, rumination, reinforce this tendency?

In the framework of TIV, we define rumination as a deep and lengthy emotional engagement in interpersonal offenses, including all kinds of images and emotions. And what’s interesting is that rumination may be related to the expectation of future offenses. Other studies have shown that rumination perpetuates distress and aggression caused in response to insults and threats to one’s self-esteem.

Can one develop TIV without experiencing severe trauma or actual victimization?

You don’t need to have been victimized, physically abused, for example, in order to exhibit TIV. But the people who score very high on TIV are generally those who have experienced some kind of trauma, like PTSD. Maybe they didn’t fully recover from it. Perhaps they didn’t complete therapy. Something we often see is that they tend to act very aggressively toward family members.

How debilitating is TIV for those with moderate TIV? Does it affect everyday functioning?

Yes. The higher the TIV, the more you feel victimized in all of your interpersonal relations. So if you are in the middle of the scale, you might feel yourself a victim in one relationship but not another, like with your boss, but not with your wife and friends. But the more you feel like the victim, the more you extend those feelings to all of your interpersonal relationships. And then of course it can affect every aspect of your life. If you feel victimized in your work, for example— we did a lot of experiments with the narratives of managers and workers—it means that you cannot let stand an offense by your boss, no matter how trivial. I think everyone knows that offenses in interpersonal relations are very common. I’m not talking about traumas, I’m talking about those small daily offenses, and the question is how you deal with them.
TIV aside, can there be a positive aspect of victimhood?

There could be, when victims gather together for some common purpose, like a social protest to raise the status of women. When I’m talking about victimhood, I’m talking about something that has aggression inside it, a lack of empathy, and rumination. But when you express feelings of offense in an intimate relationship, it can be positive. Because in that situation you don’t want to hide your feelings. You want to be sincere. You want to be authentic. So if you’re always trying to please, if all the time you say, “No, everything is fine, I wasn’t offended,” that doesn’t help the relationship. I think it’s very important to authentically express your negative feelings inside meaningful relations, because then the other side can be more impacted and you can have a real exchange.

Do people high on a TIV scale tend to seek out lovers or friends who share the trait?

That’s a very smart assumption, but it’s not something I empirically investigated. Theoretically, yes. I think that people who are very low on TIV, if they have a romantic relationship with someone who is high on TIV, then they would not want to continue the relationship. For the relationship to continue, you need two people who are high on this trait, or someone who is like this and someone who has very low self-esteem, which is not the same as low TIV, someone who feels they don’t deserve a better relationship.

Do people in most countries show this trait?

There are very big differences between countries. For example, when I traveled in Nepal I found that their tendency for victimhood is very low. They never show any anger and they don’t tend to blame each other. It’s childish for them to show anger.

Victimhood is also a matter of socialization.

Yes, and you see it when leaders behave like victims. People learn that it’s OK to be aggressive and it’s OK to blame others and not take responsibility for hurting others. This is just my hypothesis, but there are certain societies, particularly those with long histories of prolonged conflict, where the central narrative of the society is a victim-oriented narrative, which is the Jewish narrative. It’s called “perpetual victimhood.” Children in kindergarten learn to adopt beliefs that Israelis suffer more than Palestinians, that they always have to protect themselves and struggle for their existence. What’s interesting is the way in which this narrative enables people to internalize a nation’s history and to connect past and present suffering.

Can you extend this dynamic to groups that share this trait?

It’s a very interesting question, but unfortunately I can’t say much about it. What I can say is that the psychological components that form the tendency for interpersonal victimhood—moral elitism and lack of empathy—are also particularly relevant in describing the role of social power holders. Studies suggest that possessing power often decreases perspective-taking and reduces the accuracy in estimating the emotions of others, the interest of others and the thoughts of others. So not only
does TIV decrease perspective, but power itself has the same effect. Additionally, power increases stereotyping and objectification of other individuals. So when you join TIV tendencies and the negative characteristics of the power holder together, it can be a disaster.

What can we do to overcome victimhood?
It begins with the way we educate our children. If people learn about the four components of victimhood, and are conscious of these behaviors, they can better understand their intentions and motivations. They can reduce these tendencies. But I hear people say that if they don’t use these feelings, if they don’t act like victims, they won’t achieve what they want to achieve. And that’s very sad.

By Mark MacNamara, https://nautil.us/issue/99/universality/why-people-feel-like-victims

Carol Tavris, Ph.D. & Elliot Aronson, Ph.D.: Recognizing and overcoming cognitive dissonance

“If someone really is certain about something, they have almost certainly frozen their ability to change their minds when they need to.” —Carol Tavris

Renowned social psychologists Carol Tavris and Elliot Aronson are the co-authors of Mistakes Were Made (But Not By Me), a book which explores the science of cognitive biases and discusses how the human brain is wired for self-justification. In this episode, Carol and Elliot discuss how our desire to reconcile mental conflicts adversely affects many aspects of society. The two give real-world examples to demonstrate the pitfalls in attempts to reduce mental conflict, or dissonance. The examples reveal that no one is immune to dissonance-reduction behavior, how intellectual honesty can be trained, and lastly, how to think critically in order to avoid engaging in harmful dissonant behaviors. (Here are the highlights, but the entire episode is great. CES)

The theory of cognitive dissonance, and real examples of dissonance reduction in action [11:15];
The evolutionary reason for dissonance reduction, and cultural differences in what causes cognitive dissonance [30:30];
The great danger of smart, powerful people engaging in dissonance reduction [35:15];
Two case studies of cognitive dissonance in criminal justice [39:30];
The McMartin preschool case study—the danger in making judgments before knowing all the information [43:30];
How ideology distorts science and public opinion [56:30];
How time distorts memories [58:30];
The downside of certainty [1:05:30];
Are we all doomed to cognitive dissonance?—How two people with similar beliefs can diverge [1:09:00];
Cognitive dissonance in the police force [1:21:00];
A toolkit for overcoming cognitive dissonance [1:27:30];
Importance of separating identity from beliefs, thinking critically, and the difficulty posed by political polarity [1:30:30].

https://peterattiamd.com/caroltavris-elliotaronson/

How Description Leads to Understanding

Describing something with accuracy forces you to learn more about it. In this way, description can be a tool for learning. Accurate description requires the following:

1. Observation
2. Curiosity about what you are witnessing
3. Suspending assumptions about cause and effect

It can be difficult to stick with describing something completely and accurately. It’s hard to overcome the tendency to draw conclusions based on partial information or to leave assumptions unexplored.

Some systems, like the ecosystem that is the ocean, are complex. They have many moving parts that have multiple dependencies and interact in complicated ways. Trying to figure them out is daunting, and it can seem more sane to not bother trying—except that complex systems are everywhere. We live our lives as part of many of them, and addressing any global challenges involves understanding their many dimensions. One way to begin understanding complex systems is by describing them in detail: mapping out their parts, their multiple interactions, and how they change through time. Complex systems are often complicated—that is, they have many moving parts that can be hard to identify and define. But the overriding feature of complex systems is that they cannot be managed from the top down. Complex systems display emergent properties and unpredictable adaptations that we cannot
identify in advance. But far from being inaccessible, we can learn a lot about such systems by describing what we observe.

By Shane Parrish, https://fs.blog/2021/07/description/

Why Is It So Hard to Be Rational?

The real challenge isn’t being right but knowing how wrong you might be.

I met the most rational person I know during my freshman year of college. Greg (not his real name) had a tech-support job in the same computer lab where I worked, and we became friends. I planned to be a creative-writing major; Greg told me that he was deciding between physics and economics. He’d choose physics if he was smart enough, and economics if he wasn’t—he thought he’d know within a few months, based on his grades. He chose economics. We roomed together, and often had differences of opinion. For some reason, I took a class on health policy, and I was appalled by the idea that hospital administrators should take costs into account when providing care. (Shouldn’t doctors alone decide what’s best for their patients?) I got worked up, and developed many arguments to support my view; I felt that I was right both practically and morally. Greg shook his head. He pointed out that my dad was a doctor, and explained that I was engaging in “motivated reasoning.” My gut was telling me what to think, and my brain was figuring out how to think it. This felt like thinking, but wasn’t.

The next year, a bunch of us bought stereos. The choices were complicated: channels, tweeters, woofers, preamps. Greg performed a thorough analysis before assembling a capable stereo. I bought one that, in my opinion, looked cool and possessed some ineffable, tonal je ne sais quoi. Greg’s approach struck me as unimaginative, utilitarian. Later, when he upgraded to a new sound system, I bought his old equipment and found that it was much better than what I’d chosen. In my senior year, I began considering graduate school. One of the grad students I knew warned me off—the job prospects for English professors were dismal. Still, I made the questionable decision to embark on a Ph.D. Greg went into finance.

We stayed friends, often discussing the state of the world and the meta subject of how to best ascertain it. I felt overwhelmed by how much there was to know—there were too many magazines, too many books—and so, with Greg as my Virgil, I travelled deeper into the realm of rationality. There was, it turned out, a growing rationality movement, with its own ethos, thought style, and body of knowledge, drawn heavily from psychology and economics. Like Greg, I read a collection of rationality blogs—Marginal Revolution, Farnam Street, Interfluidity, Crooked Timber. I haunted the Web sites of the Social Science Research Network and the National Bureau of Economic Research, where I could encounter just-published findings; I internalized academic papers on the cognitive biases that slant our thinking, and learned a simple formula for estimating the “expected value” of my riskier decisions. When I was looking to buy a house, Greg walked me through the trade-offs of renting and owning (just rent); when I was contemplating switching careers, he stress-tested my scenarios (I switched). As an emotional and impulsive person by nature, I found myself working hard at
rationality. Even Greg admitted that it was difficult work: he had to constantly inspect his thought processes for faults, like a science-fictional computer that had just become sentient. Often, I asked myself, How would Greg think? I adopted his habit of tracking what I knew and how well I knew it, so that I could separate my well-founded opinions from my provisional views. Bad investors, Greg told me, often had flat, loosely drawn maps of their own knowledge, but good ones were careful cartographers, distinguishing between settled, surveyed, and unexplored territories.

Through all this, our lives unfolded. Around the time I left my grad program to try out journalism, Greg swooned over his girlfriend’s rational mind, married her, and became a director at a hedge fund. His net worth is now several thousand times my own. Meanwhile, half of Americans won’t get vaccinated; many believe in conspiracy theories or pseudoscience. It’s not that we don’t think—we are constantly reading, opining, debating—but that we seem to do it on the run, while squinting at trolls in our phones. This summer, on my phone, I read a blog post by the economist Arnold Kling, who noted that an unusually large number of books about rationality were being published this year, among them Steven Pinker’s “Rationality: What It Is, Why It Seems Scarce, Why It Matters” (Viking) and Julia Galef’s “The Scout Mindset: Why Some People See Things Clearly and Others Don’t” (Portfolio). It makes sense, Kling suggested, for rationality to be having a breakout moment: “The barbarians sack the city, and the carriers of the dying culture repair to their basements to write.”

In a polemical era, rationality can be a kind of opinion hygiene—a way of washing off misjudged views. In a fractious time, it promises to bring the court to order. When the world changes quickly, we need strategies for understanding it. We hope, reasonably, that rational people will be more careful, honest, truthful, fair-minded, curious, and right than irrational ones. And yet rationality has sharp edges that make it hard to put at the center of one’s life. It’s possible to be so rational that you are cut off from warmer ways of being—like the student Bazarov, in Ivan Turgenev’s “Fathers and Sons,” who declares, “I look up to heaven only when I want to sneeze.” (Greg, too, sometimes worries that he is rational to excess—that he is becoming a heartless boss, a cold fish, a robot.) You might be well-intentioned, rational, and mistaken, simply because so much in our thinking can go wrong. (“rational, adj.: Devoid of all delusions save those of observation, experience and reflection,” Ambrose Bierce wrote, in his “Devil’s Dictionary.”) You might be rational and self-deceptive, because telling yourself that you are rational can itself become a source of bias. It’s possible that you are trying to appear rational only because you want to impress people; or that you are more rational about some things (your job) than others (your kids); or that your rationality gives way to rancor as soon as your ideas are challenged. Perhaps you irrationally insist on answering difficult questions yourself when you’d be better off trusting the expert consensus. Possibly, like Mr. Spock, of “Star Trek,” your rational calculations fail to account for the irrationality of other people.
(Surveying Spock’s predictions, Galef finds that the outcomes Spock has determined to be impossible actually happen about eighty per cent of the time, often because he assumes that other people will be as “logical” as he is.)

Not just individuals but societies can fall prey to false or compromised rationality. In a 2014 book, “The Revolt of the Public and the Crisis of Authority in the New Millennium,” Martin Gurri, a C.I.A. analyst turned libertarian social thinker, argued that the unmasking of allegedly pseudo-rational
rational institutions had become the central drama of our age: people around the world, having concluded that the bigwigs in our colleges, newsrooms, and legislatures were better at appearing rational than at being so, had embraced a nihilist populism that sees all forms of public rationality as suspect. covid deniers and climate activists are different kinds of people, but they’re united in their frustration with the systems built by experts on our behalf—both groups picture élites shuffling PowerPoint decks in Davos while the world burns. From this perspective, the root cause of mass irrationality is the failure of rationalists. People would believe in the system if it actually made sense. And yet modern life would be impossible without those rational systems; we must improve them, not reject them. We have no choice but to wrestle with rationality—an ideal that, the sociologist Max Weber wrote, “contains within itself a world of contradictions.” We want to live in a more rational society, but not in a falsely rationalized one. We want to be more rational as individuals, but not to overdo it. We need to know when to think and when to stop thinking, when to doubt and when to trust. Rationality is one of humanity’s superpowers. How do we keep from misusing it? Writing about rationality in the early twentieth century, Weber saw himself as coming to grips with a titanic force—an ascendant outlook that was rewriting our values. He talked about rationality in many different ways. We can practice the instrumental rationality of means and ends (how do I get what I want?) and the value rationality of purposes and goals (do I have good reasons for wanting what I want?). We can pursue the rationality of affect (am I cool, calm, and collected?) or develop the rationality of habit (do I live an ordered, or “rationalized,” life?). Rationality was obviously useful, but Weber worried that it was turning each individual into a “cog in the machine,” and life into an “iron cage.” Today, rationality and the words around it are still shadowed with Weberian pessimism and cursed with double meanings. You’re rationalizing the org chart: are you bringing order to chaos, or justifying the illogical? The Weberian definitions of rationality are by no means canonical. In “The Rationality Quotient: Toward a Test of Rational Thinking” (M.I.T.), from 2016, the psychologists Keith E. Stanovich, Richard F. West, and Maggie E. Toplak call rationality “a torturous and tortured term,” in part because philosophers, sociologists, psychologists, and economists have all defined it differently. For Aristotle, rationality was what separated human beings from animals. For the authors of “The Rationality Quotient,” it’s a mental faculty, parallel to but distinct from intelligence, which involves a person’s ability to juggle many scenarios in her head at once, without letting any one monopolize her attention or bias her against the rest. It’s because some people are better jugglers than others that the world is full of “smart people doing dumb things”: college kids getting drunk the night before a big exam, or travelers booking flights with impossibly short layovers. Galef, who hosts a podcast called “Rationally Speaking” and co-founded the nonprofit Center for Applied Rationality, in Berkeley, barely uses the word “rationality” in her book on the subject. 
Instead, she describes a “scout mindset,” which can help you “to recognize when you are wrong, to seek out your blind spots, to test your assumptions and change course.” (The “soldier mindset,” by contrast, encourages you to defend your positions at any cost.) Galef tends to see rationality as a method for acquiring more accurate views. Pinker, a cognitive and evolutionary psychologist, sees
it instrumentally, as “the ability to use knowledge to attain goals.” By this definition, to be a rational person you have to know things, you have to want things, and you have to use what you know to get what you want. Intentions matter: a person isn’t rational, Pinker argues, if he solves a problem by stumbling on a strategy “that happens to work.”

Introspection is key to rationality. A rational person must practice what the neuroscientist Stephen Fleming, in “Know Thyself: The Science of Self-Awareness” (Basic Books), calls “metacognition,” or “the ability to think about our own thinking”—“a fragile, beautiful, and frankly bizarre feature of the human mind.” Metacognition emerges early in life, when we are still struggling to make our movements match our plans. (“Why did I do that?” my toddler asked me recently, after accidentally knocking his cup off the breakfast table.) Later, it allows a golfer to notice small differences between her first swing and her second, and then to fine-tune her third. It can also help us track our mental actions. A successful student uses metacognition to know when he needs to study more and when he’s studied enough: essentially, parts of his brain are monitoring other parts.

In everyday life, the biggest obstacle to metacognition is what psychologists call the “illusion of fluency.” As we perform increasingly familiar tasks, we monitor our performance less rigorously; this happens when we drive, or fold laundry, and also when we think thoughts we’ve thought many times before. Studying for a test by reviewing your notes, Fleming writes, is a bad idea, because it’s the mental equivalent of driving a familiar route. “Experiments have repeatedly shown that testing ourselves—forcing ourselves to practice exam questions, or writing out what we know—is more effective,” he writes. The trick is to break the illusion of fluency, and to encourage an “awareness of ignorance.”

Fleming notes that metacognition is a skill. Some people are better at it than others. Galef believes that, by “calibrating” our metacognitive minds, we can improve our performance and so become more rational. In a section of her book called “Calibration Practice,” she offers readers a collection of true-or-false statements (“Mammals and dinosaurs coexisted”; “Scurvy is caused by a deficit of Vitamin C”); your job is to weigh in on the veracity of each statement while also indicating whether you are fifty-five, sixty-five, seventy-five, eighty-five, or ninety-five per cent confident in your determination. A perfectly calibrated individual, Galef suggests, will be right seventy-five per cent of the time about the answers in which she is seventy-five per cent confident. With practice, I got fairly close to “perfect calibration”: I still answered some questions wrong, but I was right about how wrong I would be.

There are many calibration methods. In the “equivalent bet” technique, which Galef attributes to the decision-making expert Douglas Hubbard, you imagine that you’ve been offered two ways of winning ten thousand dollars: you can either bet on the truth of some statement (for instance, that self-driving cars will be on the road within a year) or reach blindly into a box full of balls in the hope of retrieving a marked ball. Suppose the box contains four balls. Would you prefer to answer the question, or reach into the box? (I’d prefer the odds of the box.) Now suppose the box contains twenty-four balls—would your preference change?
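To make the equivalent-bet arithmetic concrete, here is a minimal Python sketch (my own illustration, not anything Galef or Hubbard prescribe), assuming the box always holds exactly one marked ball:

```python
# Equivalent-bet technique: one marked ball hidden in a box of n balls.
# If you would rather draw from the box than bet on your claim, your real
# confidence in the claim is lower than 1/n; if you'd rather bet, it's higher.

def implied_confidence(n_balls: int) -> float:
    """Probability of drawing the single marked ball from a box of n_balls."""
    return 1.0 / n_balls

for n in (4, 9, 24):
    print(f"box of {n:>2} balls -> implied confidence {implied_confidence(n):.1%}")

# box of  4 balls -> implied confidence 25.0%
# box of  9 balls -> implied confidence 11.1%
# box of 24 balls -> implied confidence 4.2%
```

The nine-ball box, as the next lines note, is the one Galef treats as “equivalent” to her belief about self-driving cars, hence her roughly eleven-per-cent confidence.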
By imagining boxes with different numbers of balls, you can get a sense of how much you really believe in your assertions. For Galef, the box that’s “equivalent” to her belief in the imminence of self-driving cars contains nine balls,
suggesting that she has eleven-per-cent confidence in that prediction. Such techniques may reveal that our knowledge is more fine-grained than we realize; we just need to look at it more closely. Of course, we could be making out detail that isn’t there.

Knowing about what you know is Rationality 101. The advanced coursework has to do with changes in your knowledge. Most of us stay informed straightforwardly—by taking in new information. Rationalists do the same, but self-consciously, with an eye to deliberately redrawing their mental maps. The challenge is that news about distant territories drifts in from many sources; fresh facts and opinions aren’t uniformly significant. In recent decades, rationalists confronting this problem have rallied behind the work of Thomas Bayes, an eighteenth-century mathematician and minister. So-called Bayesian reasoning—a particular thinking technique, with its own distinctive jargon—has become de rigueur.

There are many ways to explain Bayesian reasoning—doctors learn it one way and statisticians another—but the basic idea is simple. When new information comes in, you don’t want it to replace old information wholesale. Instead, you want it to modify what you already know to an appropriate degree. The degree of modification depends both on your confidence in your preexisting knowledge and on the value of the new data. Bayesian reasoners begin with what they call the “prior” probability of something being true, and then find out if they need to adjust it.

Consider the example of a patient who has tested positive for breast cancer—a textbook case used by Pinker and many other rationalists. The stipulated facts are simple. The prevalence of breast cancer in the population of women—the “base rate”—is one per cent. When breast cancer is present, the test detects it ninety per cent of the time. The test also has a false-positive rate of nine per cent: that is, nine per cent of the time it delivers a positive result when it shouldn’t. Now, suppose that a woman tests positive. What are the chances that she has cancer?

When actual doctors answer this question, Pinker reports, many say that the woman has a ninety-per-cent chance of having it. In fact, she has about a nine-per-cent chance. The doctors have the answer wrong because they are putting too much weight on the new information (the test results) and not enough on what they knew before the results came in—the fact that breast cancer is a fairly infrequent occurrence. To see this intuitively, it helps to shuffle the order of your facts, so that the new information doesn’t have pride of place. Start by imagining that we’ve tested a group of a thousand women: ten will have breast cancer, and nine will receive positive test results. Of the nine hundred and ninety women who are cancer-free, eighty-nine will receive false positives. Now you can allow yourself to focus on the one woman who has tested positive. To calculate her chances of getting a true positive, we divide the number of positive tests that actually indicate cancer (nine) by the total number of positive tests (ninety-eight). That gives us about nine per cent.

Bayesian reasoning is an approach to statistics, but you can use it to interpret all sorts of new information. In the early hours of September 26, 1983, the Soviet Union’s early-warning system detected the launch of intercontinental ballistic missiles from the United States. Stanislav Petrov, a forty-four-year-old duty officer, saw the warning.
He was charged with reporting it to his superiors, who probably would have launched a nuclear counterattack. But Petrov, who in all likelihood had never heard of Bayes, nevertheless employed Bayesian reasoning. He didn’t let the new
information determine his reaction all on its own. He reasoned that the probability of an attack on any given night was low—comparable, perhaps, to the probability of an equipment malfunction. Simultaneously, in judging the quality of the alert, he noticed that it was in some ways unconvincing. (Only five missiles had been detected—surely a first strike would be all-out?) He decided not to report the alert, and saved the world.
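Returning to the breast-cancer example above, the same calculation can be checked in a few lines of Python; this is a minimal sketch using only the numbers stipulated in the essay, not anything drawn from Pinker’s book:

```python
# Bayes' rule with the essay's stipulated numbers:
# 1% base rate, 90% sensitivity, 9% false-positive rate.

base_rate = 0.01       # prior probability of cancer
sensitivity = 0.90     # P(positive test | cancer)
false_pos_rate = 0.09  # P(positive test | no cancer)

true_positives = base_rate * sensitivity            # 0.009
false_positives = (1 - base_rate) * false_pos_rate  # 0.0891
posterior = true_positives / (true_positives + false_positives)

print(f"P(cancer | positive test) = {posterior:.1%}")  # prints 9.2%
```

The result matches the shuffled-facts reasoning above: nine true positives out of ninety-eight total positives, or about nine per cent.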
Bayesian reasoning implies a few “best practices.” Start with the big picture, fixing it firmly in your mind. Be cautious as you integrate new information, and don’t jump to conclusions. Notice when new data points do and do not alter your baseline assumptions (most of the time, they won’t alter them), but keep track of how often those assumptions seem contradicted by what’s new. Beware the power of alarming news, and proceed by putting it in a broader, real-world context. In a sense, the core principle is mise en place. Keep the cooked information over here and the raw information over there; remember that raw ingredients often reduce over heat. But the real power of the Bayesian approach isn’t procedural; it’s that it replaces the facts in our minds with probabilities. Where others might be completely convinced that G.M.O.s are bad, or that Jack is trustworthy, or that the enemy is Eurasia, a Bayesian assigns probabilities to these propositions. She doesn’t build an immovable world view; instead, by continually updating her probabilities, she inches closer to a more useful account of reality. The cooking is never done. Applied to specific problems—Should you invest in Tesla? How bad is the Delta variant?—the techniques promoted by rationality writers are clarifying and powerful. But the rationality movement is also a social movement; rationalists today form what is sometimes called the “rationality community,” and, as evangelists, they hope to increase its size. The rationality community has its own lingua franca. If a rationalist wants to pay you a big compliment, she might tell you that you have caused her to “revise her priors”—that is, to alter some of her well-justified prior assumptions. (On her mental map, a mountain range of possibilities has gained or lost probabilistic altitude.) That same rationalist might talk about holding a view “on the margin”—a way of saying that an idea or fact will be taken into account, as a kind of tweak on a prior, the next time new information comes in. (Economists use the concept of “marginal utility” to describe how we value things in series: the first nacho is delightful, but the marginal utility of each additional nacho decreases relative to that of a buffalo wing.) She might speak about “updating” her opinions—a cheerful and forward-looking locution, borrowed from the statistical practice of “Bayesian updating,” which rationalists use to destigmatize the act of admitting a mistake. In use, this language can have a pleasingly deliberate vibe, evoking the feeling of an edifice being built. “Every so often a story comes along that causes me to update my priors,” the economist Tyler Cowen wrote, in 2019, in response to the Jeffrey Epstein case. “I am now, at the margin, more inclined to the view that what keeps many people on good behavior is simply inertia.” In Silicon Valley, people wear T-shirts that say “Update Your Priors,” but talking like a rationalist doesn’t make you one. A person can drone on about base rates with which he’s only loosely familiar, or say that he’s revising his priors when, in fact, he has only ordinary, settled opinions. Google makes it easy to project faux omniscience. A rationalist can give others and himself the impression of having read and digested a whole academic subspecialty, as though he’d earned a Ph.D. in a week; still, he won’t know which researchers are trusted by their colleagues and which
are ignored, or what was said after hours at last year’s conference. There’s a difference between reading about surgery and actually being a surgeon, and the surgeon’s priors are what we really care about. In a recent interview, Cowen—a superhuman reader whose blog, Marginal Revolution, is a daily destination for info-hungry rationalists—told Ezra Klein that the rationality movement has adopted an “extremely culturally specific way of viewing the world.” It’s the culture, more or less, of winning arguments in Web forums. Cowen suggested that to understand reality you must not just read about it but see it firsthand; he has grounded his priors in visits to about a hundred countries, once getting caught in a shoot-out between a Brazilian drug gang and the police.

Clearly, we want people in power to be rational. And yet the sense that rationalists are somehow unmoored from direct experience can make the idea of a rationalist with power unsettling. Would such a leader be adrift in a matrix of data, more concerned with tending his map of reality than with the people contained in that reality? In a sketch by the British comedy duo Mitchell and Webb, a government minister charged with ending a recession asks his analysts if they’ve considered “killing all the poor.” “I’m not saying do it—I’m just saying run it through the computer and see if it would work,” he tells them. (After they say it won’t, he proposes “blue-skying” an even more senseless alternative: “Raise V.A.T. and kill all the poor.”) This caricature echoes a widespread skepticism of rationality as a value system. When the Affordable Care Act was wending its way through Congress, conservatives worried that similar proposals would pop up on “death panels,” where committees of rational experts would suggest lowering health-care costs by killing the aged. This fear, of course, was sharpened by the fact that we really do spend too much money on health care in the last few years of life. It’s up to rationalists to do the uncomfortable work of pointing out uncomfortable truths; sometimes in doing this they seem a little too comfortable.

In our personal lives, the dynamics are different. Our friends don’t have power over us; the best they can do is nudge us in better directions. Elizabeth Bennet, the protagonist of “Pride and Prejudice,” is intelligent, imaginative, and thoughtful, but it’s Charlotte Lucas, her best friend, who is rational. Charlotte uses Bayesian reasoning. When their new acquaintance, Mr. Darcy, is haughty and dismissive at a party, she gently urges Lizzy to remember the big picture: Darcy is “so very fine a young man, with family, fortune, everything in his favour”; in meeting him, therefore, one’s prior should be that rich, good-looking people often preen at parties; such behavior is not, in itself, revelatory. When Charlotte marries Mr. Collins, an irritating clergyman with a secure income, Lizzy is appalled at the match—but Charlotte points out that the success of a marriage depends on many factors, including financial ones, and suggests that her own chances of happiness are “as fair as most people can boast on entering the marriage state.” (In modern times, the base rates would back her up: although almost fifty per cent of marriages end in divorce, the proportion is lower among higher-income people.) It’s partly because of Charlotte’s example that Lizzy looks more closely at Mr. Darcy, and discovers that he is flawed in predictable ways but good in unusual ones.
Rom-com characters often have passionate friends who tell them to follow their hearts, but Jane Austen knew that it’s really rational friends we need. In fact, as Charlotte shows, the manner of a kind rationalist can verge on courtliness, which hints at deeper qualities. Galef describes a typically well-mannered exchange on the now defunct Web site ChangeAView. A male blogger, having been told that one of his posts was sexist, strenuously defended himself at first. Then, in a follow-up post titled “Why It’s Plausible I’m Wrong,” he
carefully summarized the best arguments made against him; eventually, he announced that he’d been convinced of the error of his ways, apologizing not just to those he’d offended but to those who had sided with him for reasons that he now believed to be mistaken. Impressed by his sincere and open-minded approach, Galef writes, she sent the blogger a private message. Reader, they got engaged.
The rationality community could make a fine setting for an Austen novel written in 2021. Still, we might ask, How much credit should rationality get for drawing Galef and her husband together? It played a role, but rationality isn’t the only way to understand the traits she perceived. I’ve long admired my friend Greg for his rationality, but I’ve since updated my views. I think it’s not rationality, as such, that makes him curious, truthful, honest, careful, perceptive, and fair, but the reverse. In “Rationality,” “The Scout Mindset,” and other similar books, irrationality is often presented as a form of misbehavior, which might be rectified through education or socialization. This is surely right in some cases, but not in all. One spring, when I was in high school, a cardinal took to flying at our living-room window, and my mother—who was perceptive, funny, and intelligent, but not particularly rational—became convinced that it was a portent. She’d sometimes sit in an armchair, waiting for it, watchful and unnerved. Similar events—a torn dollar bill found on the ground, a flat tire on the left side of the car rather than the right—could cast shadows over her mood for days, sometimes weeks. As a voter, a parent, a worker, and a friend, she was driven by emotion. She had a stormy, poetic, and troubled personality. I don’t think she would have been helped much by a book about rationality. In a sense, such books are written for the already rational. My father, by contrast, is a doctor and a scientist by profession and disposition. When I was a kid, he told me that Santa Claus wasn’t real long before I figured it out; we talked about physics, computers, biology, and “Star Trek,” agreeing that we were Spocks, not Kirks. My parents divorced decades ago. But recently, when my mother had to be discharged from a hospital into a rehab center, and I was nearly paralyzed with confusion about what I could or should do to shape where she’d end up, he patiently, methodically, and judiciously walked me through the scenarios on the phone, exploring each forking path, sorting the inevitabilities from the possibilities, holding it all in his head and communicating it dispassionately. All this was in keeping with his character. I’ve spent decades trying to be rational. So why did I feel paralyzed while trying to direct my mother’s care? Greg tells me that, in his business, it’s not enough to have rational thoughts. Someone who’s used to pondering questions at leisure might struggle to learn and reason when the clock is ticking; someone who is good at reaching rational conclusions might not be willing to sign on the dotted line when the time comes. Greg’s hedge-fund colleagues describe as “commercial”—a compliment—someone who is not only rational but timely and decisive. An effective rationalist must be able to short the mortgage market today, or commit to a particular rehab center now, even though we live in a world of Bayesian probabilities. I know, rationally, that the coronavirus poses no significant risk to my small son, and yet I still hesitated before enrolling him in daycare for this fall, where he could make friends. You can know what’s right but still struggle to do it.
Following through on your own conclusions is one challenge. But a rationalist must also be “metarational,” willing to hand over the thinking keys when someone else is better informed or better trained. This, too, is harder than it sounds. Intellectually, we understand that our complex society requires the division of both practical and cognitive labor. We accept that our knowledge maps are limited not just by our smarts but by our time and interests. Still, like Gurri’s populists, rationalists may stage their own contrarian revolts, repeatedly finding that no one’s opinions but their own are defensible. In letting go, as in following through, one’s whole personality gets involved.

I found it possible to be metarational with my dad not just because I respected his mind but because I knew that he was a good and cautious person who had my and my mother’s best interests at heart. I trusted that, unlike the minister in the Mitchell and Webb sketch, he would care enough to think deeply about my problem. Caring is not enough, of course. But, between the two of us, we had the right ingredients—mutual trust, mutual concern, and a shared commitment to reason and to act.

The realities of rationality are humbling. Know things; want things; use what you know to get what you want. It sounds like a simple formula. But, in truth, it maps out a series of escalating challenges. In search of facts, we must make do with probabilities. Unable to know it all for ourselves, we must rely on others who care enough to know. We must act while we are still uncertain, and we must act in time—sometimes individually, but often together. For all this to happen, rationality is necessary, but not sufficient. Thinking straight is just part of the work.

By Joshua Rothman, https://www.newyorker.com/magazine/2021/08/23/why-is-it-so-hard-to-be-rational

Please continue with Part 2 of this Special Edition – Misinformation, Vaccines, and Prosocial Ways to Respond - CES #
#
#
If you'd like to learn more or connect, please do, just click here. You can join my email list to keep in touch. Tools and my podcast are available via http://ALifeInFull.org. Click here for a free subscription to our Weekly LinkedIn Newsletter.

If you liked this article, you may also like:
The Reproducibility Problem in Science—What’s a Scientist to do? (Part 3 in a series of 3)
The Reproducibility Problem in Science—Shame on us? (Part 2 in a series of 3)
The Reproducibility Problem—Can Science be Trusted? (Part 1 in a series of 3)
Can AI Really Make Healthcare More Human—and not be creepy?
How to Protect Yourself from Fad Science
Technology Trends in Healthcare and Medicine: Will 2019 Be Different?
Commoditization, Retailization and Something (Much) Worse in Medicine and Healthcare
Fits and Starts: Predicting the (Very) Near Future of Technology and Behavioral Healthcare
Why I think 2018 will (Finally) be the Tipping Point for Medicine and Technology
Healthcare Innovation: Are there really Medical Unicorns?
Can (or Should) We Guarantee Medical Outcomes?
A Cure for What Ails Healthcare's Benchmarking Ills?
Why Global Health Matters

Source: https://www.linkedin.com/pulse/special-edition-strong-opinions-loosely-held-part-1-drchris-stout/