Special Edition Strong Opinions Loosely Held: Part 3


Science, Psychology, Misinformation, Vaccines, and Prosocial Ways to Respond

Published on October 18, 2021
Dr. Chris Stout

Preface

I started writing two pieces (Part 1 and Part 2) from a place of concern, spurred by a few reader comments in response to my weekly Newsletter. I have a wonderfully diverse group of friends, some of whom post things that, to my mind, may indicate a lack of understanding of science or public health concerning the pandemic. I’m sure those posts were well-intentioned, but they nevertheless upset me. I did not want to create a stir or upset a friend by responding with a long-form explanation of what may have been an off-the-cuff repost of something that resonated with their point of view but was incorrect, and that could have an unintended iatrogenic impact on their less informed readers.


So, instead, I have organized a multipart series of curated collections in response. They cover five areas, purposely organized in this order: Science, Psychology, Misinformation, Vaccines, and Prosocial Ways to Respond. This and subsequent pieces that I vet and compile follow the same areas but with newer points. Similar to my Newsletter’s format, there will be less of my writing as I take on the role of editor and curate the content from the voices of authors and experts. These are all long-form posts that take time to read. (They will take even longer to read if you choose to look into the linked original sources and listen to the associated podcasts—which I sincerely hope you will do.) They are not “TL;DR” pieces. Please be curious. Please read this from a perspective of wanting to learn and understand how others may hold a point of view that’s different from yours. As Covey put it, “Seek first to understand.” I am the poster-boy for seeking to understand what I am puzzled by. I hope you find this of use.

Science

"Your first impulse should always be to find the evidence that disconfirms your most cherished beliefs and those of others. That is true science." – Robert Greene, The Laws of Human Nature

“If we are honest with ourselves, we have to admit that sometimes our assumptions and preconceived notions are wrong, and therefore, our interpretation of events is incorrect.” – Elizabeth Thornton

Studying Studies – Some of the Pitfalls of Epidemiology
By Peter Attia, MD, 12 September 2021

I was recently a guest on my friend Tim Ferriss’s podcast discussing one of my favorite topics, which is how to read and interpret studies. As part of that discussion I referenced our five-part “Studying Studies” series on the topic as a good starting point to understand the limitations of scientific work. Much of the work we focus on in the Studying Studies series is aimed at exposing some of the pitfalls of epidemiology. This might leave you coming away thinking that experiments — that is, controlled experiments with randomization — as opposed to observational studies, are foolproof. But this is not the case. Experiments have a number of hurdles they must jump over in order to establish reliable knowledge:

1. Poor design
2. Poor execution
3. Poor analysis
4. Poor interpretation

5. And if that is not enough, even if 1-4 are done correctly, a typically scientifically illiterate media will often layer on their version of some bastardized interpretation in an effort to make the study more appealing to the general population.

Some of these issues were recently put on display when I came across a study titled “Rapamycin impairs bone accrual in young adult mice independent of Nrf2.” This group of investigators tested the impact of rapamycin on the skeletal structure of young mice and found that the mice receiving rapamycin had lower bone density and bone volume compared with control mice. Should this raise concern for those who believe rapamycin is potentially the most geroprotective molecule out there? At first blush, absolutely. Bone health is crucial for healthy aging. Given rapamycin’s remarkable potential to benefit human health, it’s important to understand its potential risks as well.

There are risks with any drug, and rapamycin is no exception. Prior to 2009, rapamycin was primarily used as an immune suppressor for people receiving organ transplants to prevent them from rejecting the foreign organ. But the context — and the dosing — matters. Immune suppression occurs under daily dosing of rapamycin, but under different dosing and scheduling, rapamycin (and its analogs) can actually enhance immune function. I spoke with Lloyd Klickstein and Joan Mannick on the podcast about their work showing mTOR inhibition improves immune function in the elderly.

A closer examination of the recent mouse paper demonstrated a few important details that should weigh heavily on any interpretation, beginning with the age of the mice when treated. To my delight, my friend Matt Kaeberlein (@mkaeberlein on Twitter) had already dug into the study himself and published a detailed and nuanced response on Twitter in the form of a 14-tweet thread, which is reproduced verbatim below (between the section breaks). I think Matt’s thread is worth sharing for two reasons.
1. Understanding the risks of rapamycin is important given that it seems to be the most geroprotective molecule we know about, and
2. It’s a great example of a study being poorly interpreted.

§



@mkaeberlein (1/14): A few days ago I chose to call out a misleading Tweet by my friend and colleague @lamminglab [Dudley Lamming] that appears to endorse a flawed interpretation of a new study testing the effects of rapamycin on bone in young mice:

@lamminglab: “These data show that rapamycin may have a negative impact on the skeleton of adult mice that should not be overlooked in the clinical context of its usage as a therapy to retard aging and reduce the incidence of age-related pathologies.”

Interesting. You may wonder, what’s the point?

@mkaeberlein (2/14): IMO, one reason for being on Twitter as an expert in #geroscience is to try to prevent misconceptions and misinterpretations that have the potential to damage the field. This appears to me as a classic example of how misinterpretation can potentially do great harm.

@mkaeberlein (3/14): The study in question used very young mice that are still growing to test the effects of rapamycin on bone. They found that the mice receiving rapamycin had lower bone density. Importantly, no evidence for lower bone quality or bone frailty, but that was not discussed.

@mkaeberlein (4/14): The interpretation here suggested these results should raise concern for use of rapamycin clinically to delay or reverse aspects of biological aging. What the paper and the Tweet failed to present is the obvious flaw in this interpretation.

@mkaeberlein (5/14): Also neglected in the paper and Tweet was data from multiple studies showing that rapamycin in aged mice and rats can reverse age-associated bone loss. Two examples:
1. Periodontal bone in *aged* mice: “Rapamycin rejuvenates oral health in aging mice”
2. Trabecular bone in *aged* rats: “Rapamycin reduces severity of senile osteoporosis by activating osteocyte autophagy”

@mkaeberlein (6/14): If the goal is to understand how rapamycin affects bone deposition and/or resorption in developing/growing mice, this is a perfectly good study. If the goal is to understand effects on aging, it is absolutely the wrong experiment.

@mkaeberlein (7/14): I’m certain @lamminglab would reject any grant that proposed to treat young mice with an intervention and extrapolate outcomes to biological aging. In NIH reviewer-speak this is a “fatal flaw”. So, why didn’t reviewers catch it? And why does this matter for the field?



@mkaeberlein (8/14): Rapamycin has a bad reputation due to its historical use at high doses in sick patients, where there are significant side effects. This is used by uninformed and/or intellectually dishonest people to suggest it can't be used at lower doses to target biological aging.

@mkaeberlein (9/14): I will continue to refute this misinformation with actual data. Rapamycin at low doses appears safer than most drugs. We and others are actively testing rapamycin in clinical trials and hundreds of people are taking it at low doses with AFAIK no serious adverse events.

@mkaeberlein (10/14): Unfortunately, those who want to argue rapamycin is a “bad drug” won't address the actual data. Should we be careful and watch for side effects? Of course. Is there any reason to think we’ll see bad ones at doses under consideration? Nope.

@mkaeberlein (11/14): How is this damaging to the field and to patients? Rapamycin is still the most effective and reproducible pharmacological intervention for delaying/reversing biological aging out there.

@mkaeberlein (12/14): Grants don’t get funded and clinical trials don’t happen because of false perceptions around the safety profile of rapamycin. Progress has been and continues to be held back.

@mkaeberlein (13/14): Rapamycin should have been tested for Alzheimer’s disease 10 years ago. I know from personal experience that misinformation about the side effects has kept this from happening. The people who spread this misinformation bear some responsibility.

@mkaeberlein (14/14): The fact is that even most experts in the field, let alone “experts” in the Twitter-verse, won’t bother to actually look at the data. So, FWIW, I try to pick my battles and clarify misinterpretation/misinformation around rapamycin when I see it.

§

Matt’s thread highlights a very important rule when studying studies: be skeptical of both the evidence and the interpretation in experimental research.

Source: https://peterattiamd.com/rapamycin-risks/

Ten errors in randomized experiments
By Peter Attia, MD, 17 October 2021

A recent review discusses errors in the implementation, analysis, and reporting of randomization within obesity and nutrition research.



Anyone who’s read my stuff with any regularity is acutely aware of my disdain for the way many observational studies are conducted and interpreted in health and nutrition research, as well as my admiration for randomized-controlled trials (RCTs). Randomization, a method by which study participants are assigned to treatment groups based on chance alone, is a critical component in distinguishing cause and effect. Randomization helps to prevent investigators from introducing systematic (and often hidden) biases between experimental groups. But there are also many ways in which randomized experiments can fall short.

Recently, David Allison and his colleagues published an excellent review discussing ten errors in the implementation, analysis, and reporting of randomized experiments — and outlined best practices to avoid them. David is the Dean of Public Health at Indiana University, where he conducts research on obesity and practices psychology. He is also one of the best statisticians in the world, and will be joining me soon as a guest on the podcast. I’ve provided a brief summary of his review below, but to anyone interested in improving their ability to read and understand research, I suggest reading the original text in its entirety. Here, I focus on the general point that while RCTs may be considered the gold standard for establishing reliable knowledge, they are also prone to error and bias.

§

A) Errors in implementing group allocation

1 | Representing nonrandom allocation methods as random

Occasionally, in studies styled as “randomized,” participants are allocated into treatment groups by use of methods that are not, in fact, random. The review authors provide the example of a vitamin D supplementation trial in which the control group came from a nonrandomized cohort from another hospital. Lack of appropriate randomization can introduce selection bias: the selection of subjects into a study that is not representative of the target population.
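To make the contrast concrete, genuinely random allocation is simple to implement and to audit. A minimal Python sketch (the participant labels and seed here are my own, purely for illustration, not from the review):

```python
import random

def randomize_1_to_1(participants, seed=2021):
    """Allocate participants 1:1 to treatment or control purely by chance.

    A fixed seed makes the allocation sequence reproducible for auditing.
    """
    rng = random.Random(seed)
    shuffled = list(participants)
    rng.shuffle(shuffled)
    half = len(shuffled) // 2
    return {"treatment": shuffled[:half], "control": shuffled[half:]}

# Hypothetical participant IDs
groups = randomize_1_to_1([f"P{i:03d}" for i in range(1, 101)])
```

Contrast this with drawing a control group from a separate, nonrandomized cohort: no amount of downstream statistics can recover the comparability that chance allocation provides.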
A 2017 analysis by John Carlisle suggested that nonrandom allocation may be a concern in many studies labeled as “randomized.” One of the trials flagged was the well-known PREDIMED trial. Study participants at high cardiovascular risk were randomly assigned to a Mediterranean diet supplemented with mixed nuts or olive oil, or to a low-fat diet. In some cases, whole households were collectively assigned to the same diet. Even more problematic, one of the sites in the trial assigned entire clinics to the same diet. However, the investigators did not initially report this, and they analyzed their data at the level of individual participants rather than at the level of household or clinic. After discovering these problems in a post-publication audit, PREDIMED investigators retracted and reanalyzed the study, leading to various changes in findings.

2 | Failing to adequately conceal allocation

Allocation concealment hides the sorting of trial participants into treatment groups, preventing researchers from knowing the allocation of the next participant, and participants from knowing their assignment ahead of time. Allocation concealment is different from blinding. Allocation concealment ensures the treatment to be allocated is not known before that participant is entered into the study, while blinding ensures either the participant or investigator (or both, in the case of double-blinding) remains unaware of treatment allocation after the participant is enrolled in the study.

Studies with poor allocation concealment are prone to selection bias. Poor allocation concealment from participants can lead to bias when, for example, certain study participants prefer one possible treatment over another. Those participants may drop out of the study if they become aware that they will not receive their preferred treatment, potentially skewing the group populations. Poor allocation concealment from investigators can also lead to bias. Researchers may — consciously or unconsciously — place participants expected to have the best outcomes in the treatment group and those expected to have poorer outcomes in the control group.

3 | Not accounting for changes in allocation ratios

When designing an RCT, one step in the process is determining the ratio of subjects assigned to each group. It’s not always 1:1 – that is, one subject assigned to treatment for every subject assigned to placebo. Sometimes it’s necessary from a statistical standpoint to assign twice (2:1) or three times (3:1) as many individuals to the treatment group as the placebo group. Further, investigators may choose to change the ratios in the middle of a study for various reasons.
However, changing the allocation ratio partway through a study requires corresponding changes to statistical analyses, which doesn’t always happen. Dr. Allison gives the example of a study investigating body weight changes associated with daily intake of sucrose or one of four low-calorie sweeteners. Participants were initially randomly allocated evenly among the five treatment groups (1:1:1:1:1). Because one group had a high attrition rate, the investigators changed to a 2:1:1:1:1 ratio halfway through the study, but they did not account for these different study phases in their statistical analyses.
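For illustration, unequal ratios are commonly implemented with permuted-block randomization, which keeps the ratio balanced throughout enrollment. A hedged Python sketch (the block structure, labels, and seed are my own choices, not taken from the review):

```python
import random

def block_randomize(n, ratio=(2, 1), labels=("treatment", "placebo"), seed=7):
    """Permuted-block randomization at a fixed ratio (e.g. 2:1 treatment:placebo).

    Each block contains ratio[i] copies of labels[i], shuffled independently,
    so the intended ratio holds at every block boundary during enrollment.
    """
    rng = random.Random(seed)
    block_template = [lab for lab, count in zip(labels, ratio) for _ in range(count)]
    assignments = []
    while len(assignments) < n:
        block = block_template[:]
        rng.shuffle(block)
        assignments.extend(block)
    return assignments[:n]

alloc = block_randomize(90)  # 90 participants at 2:1 -> 60 treatment, 30 placebo
```

If the ratio is changed partway through the study, each phase should then be treated as its own stratum in the analysis rather than pooled naively, which is exactly the adjustment found missing in the sweetener example.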



4 | Replacements are not randomly selected

In virtually all RCTs, some participants will inevitably drop out. One way that investigators try to mitigate this problem is by using intention-to-treat (ITT) analysis, which we discussed in more depth in this article on the efficacy vs. effectiveness of a time-restricted eating trial. In ITT analyses, every participant that is assigned to a treatment group must be included in outcome analyses, regardless of whether those participants followed the protocol or dropped out of the study. In some cases, investigators replace dropouts with more participants to ensure the study remains adequately powered. These replacements must be randomized to avoid another form of Error #3: changing allocation ratios. (For more information on statistical power, which represents the probability that a study will correctly identify a genuine effect, read Part V of our Studying Studies series.)

B) Errors in the analysis of randomized experiments

5 | Failing to account for non-independence

Sometimes groups of subjects are randomly assigned to a treatment together, but are analyzed as if they were randomized individually. For instance, an entire classroom might be randomized to one group while a separate classroom is assigned to another. These types of studies are referred to as cluster RCTs and are subject to error when they are powered and analyzed at the individual level instead of the group level. The PREDIMED study exemplifies this error, as groups of individuals within certain households or clinics were assigned to a treatment together, but the authors did not initially adjust their statistical analysis to account for clustering.

6 | Basing conclusions on within-group statistical tests instead of between-groups tests

The strength of an RCT lies in its ability to compare the results between two or more groups. For example, I recently wrote about a study that randomized men to morning exercise, evening exercise, or no exercise.
The investigators reported that nocturnal glucose profiles improved only in men who exercised in the evening. The improvement, however, was “in-group,” meaning that nocturnal glucose levels had improved relative to baseline values, not compared to the other groups in the study. The authors’ conclusion that evening exercise conferred greater benefit for glycemic control than morning or no exercise is thus an example of the Difference in Nominal Significance (DINS) error. This error occurs when differences in “in-group” effects are used to draw conclusions about differences in “between-group” effects, rather than directly comparing groups to each other.

7 | Improper pooling of data



Pooling data under the umbrella of one study without accounting for it in statistical analyses can introduce bias. Dr. Allison cites an example of a trial on the effects of weight loss on telomere length in women with breast cancer. Data were pooled from two different phases of an RCT with different allocation ratios (see Error #3), which wasn’t taken into account in the analysis. The different sites, subgroups, or phases of a study need to be taken into account during analysis. Otherwise, any differences in the subsets of data being pooled together can bias the estimation of an effect in the trial.

8 | Failing to account for missing data

Missing data — whether due to dropouts, errors in measurement, or other reasons — may not occur completely at random, breaking the randomization component of the study and introducing bias. The review authors provide the example of a trial of intermittent energy restriction vs. continuous energy restriction on body composition and resting metabolic rate. The study had a 50% dropout rate, yet only data from participants who completed the protocol were analyzed. (This is an example of “per protocol” analysis, in which data from noncompliant subjects is removed from analyses.) Reanalysis of the study including all participants halved the magnitude of effect estimates compared with the originally reported results.

Investigators may mitigate this problem by reporting both per protocol and ITT results: efficacy and effectiveness, respectively. However, Dr. Allison suggests that this isn’t a perfect fix: “ITT can estimate the effect of assignment, not treatment per se, in an unbiased manner, whereas the per protocol analysis can only estimate in a way that allows the possibility for bias.” (As noted earlier, this article details efficacy vs. effectiveness of time-restricted eating.)
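The gap between per-protocol and ITT estimates can be made concrete with a toy calculation. The numbers below are invented for illustration (they are not from the trial discussed above), and the ITT imputation shown — carrying dropouts as zero change — is just one naive strategy among many:

```python
import statistics

# Hypothetical weight change in kg; None marks a dropout with no final measurement.
treatment = [-4.0, -3.5, -5.0, None, None, -4.5]
control = [-1.0, -0.5, None, -1.5, None, -1.0]

def per_protocol_effect(t, c):
    """Compare completers only; biased if dropout is related to outcome."""
    t_obs = [x for x in t if x is not None]
    c_obs = [x for x in c if x is not None]
    return statistics.mean(t_obs) - statistics.mean(c_obs)

def itt_effect(t, c):
    """Analyze everyone as assigned, naively imputing dropouts as zero change."""
    t_all = [0.0 if x is None else x for x in t]
    c_all = [0.0 if x is None else x for x in c]
    return statistics.mean(t_all) - statistics.mean(c_all)

pp = per_protocol_effect(treatment, control)   # -3.25 kg between groups
itt = itt_effect(treatment, control)           # about -2.17 kg: a smaller apparent effect
```

In this toy example the ITT estimate is noticeably smaller in magnitude than the per-protocol estimate, mirroring the direction of the reanalysis described above; real trials use more principled imputation methods than zero-change carry-forward.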
C) Errors in the reporting of randomization

9 | Failing to fully describe randomization

Investigators must provide sufficient information so that readers can fully comprehend and evaluate the methods used for randomization. The review authors themselves admit to having a history of inadequate reporting of randomization methods.

10 | Failing to properly communicate inferences from randomized studies



When following the ITT principle, an RCT tests the effect of assigning participants to a treatment on the outcome of interest, but investigators often communicate results as the effect of the treatment itself (meaning, how well the treatment works if followed exactly as it’s prescribed). Avoidance of this error depends on conscientious framing of the precise causality question addressed by the study. For example, in the article I wrote reviewing a time-restricted eating trial, I highlighted the investigators’ statement that, “Time-restricted eating, in the absence of other interventions, is not more effective in weight loss than eating throughout the day.” In actuality, the investigators found that being assigned to time-restricted eating, in the absence of other interventions, is not more effective in weight loss than being assigned to eating throughout the day.

§

The review from David Allison and his colleagues highlights that while randomized controlled trials are powerful tools for examining cause-and-effect relationships, they are not immune to errors and bias. The paper is a great reminder of the high level of rigor involved in designing, conducting, and reporting randomized experiments, as well as a useful guide for investigators and readers alike for avoiding many pitfalls associated with this study design.

Source: https://peterattiamd.com/ten-errors-in-randomized-experiments/

The Mental Models of Human Nature and Judgment
By Shane Parrish, no date

1. Trust

Fundamentally, the modern world operates on trust. Familial trust is generally a given (otherwise we’d have a hell of a time surviving), but we also choose to trust chefs, clerks, drivers, factory workers, executives, and many others. A trusting system is one that tends to work most efficiently; the rewards of trust are extremely high.

2. Bias from Incentives

Highly responsive to incentives, humans have perhaps the most varied and hardest to understand set of incentives in the animal kingdom. This causes us to distort our thinking when it is in our own interest to do so. A wonderful example is a salesman truly believing that his product will improve the lives of its users. It’s not merely convenient that he sells the product; the fact of his selling the product causes a very real bias in his own thinking.



3. Pavlovian Association

Ivan Pavlov very effectively demonstrated that animals can respond not just to direct incentives but also to associated objects; remember the famous dogs salivating at the ring of a bell. Human beings are much the same and can feel positive and negative emotion towards intangible objects, with the emotion coming from past associations rather than direct effects.

4. Tendency to Feel Envy & Jealousy

Humans have a tendency to feel envious of those receiving more than they are, and a desire to “get what is theirs” in due course. The tendency towards envy is strong enough to drive otherwise irrational behavior, and is as old as humanity itself. Any system ignorant of envy effects will tend to self-immolate over time.

5. Tendency to Distort Due to Liking/Loving or Disliking/Hating

Based on past association, stereotyping, ideology, genetic influence, or direct experience, humans have a tendency to distort their thinking in favor of people or things that they like and against people or things they dislike. This tendency leads to overrating the things we like and underrating or broadly categorizing things we dislike, often missing crucial nuances in the process.

6. Denial

Anyone who has been alive long enough realizes that, as the saying goes, “denial is not just a river in Africa.” This is powerfully demonstrated in situations like war or drug abuse, where denial has powerful destructive effects but allows for behavioral inertia. Denying reality can be a coping mechanism, a survival mechanism, or a purposeful tactic.

7. Availability Heuristic

One of the most useful findings of modern psychology is what Daniel Kahneman calls the Availability Bias or Heuristic: We tend to most easily recall what is salient, important, frequent, and recent. The brain has its own energy-saving and inertial tendencies that we have little control over – the availability heuristic is likely one of them. Having a truly comprehensive memory would be debilitating.
Some subexamples of the availability heuristic include the Anchoring and Sunk Cost Tendencies.

8. Representativeness Heuristic



The three major psychological findings that fall under Representativeness, also defined by Kahneman and his partner Tversky, are:

a. Failure to Account for Base Rates

An unconscious failure to look at past odds in determining current or future behavior.

b. Tendency to Stereotype

The tendency to broadly generalize and categorize rather than look for specific nuance. Like availability, this is generally a necessary trait for energy-saving in the brain.

c. Failure to See False Conjunctions

Most famously demonstrated by the Linda Test, the same two psychologists showed that students chose more vividly described individuals as more likely to fit into a predefined category than individuals with broader, more inclusive, but less vivid descriptions, even if the vivid example was a mere subset of the more inclusive set. These specific examples are seen as more representative of the category than those with the broader but vaguer descriptions, in violation of logic and probability.

9. Social Proof (Safety in Numbers)

Human beings are one of many social species, along with bees, ants, and chimps, among many more. We have a DNA-level instinct to seek safety in numbers and will look for social guidance of our behavior. This instinct creates a cohesive sense of cooperation and culture which would not otherwise be possible but also leads us to do foolish things if our group is doing them as well.

10. Narrative Instinct

Human beings have been appropriately called “the storytelling animal” because of our instinct to construct and seek meaning in narrative. It’s likely that long before we developed the ability to write or to create objects, we were telling stories and thinking in stories. Nearly all social organizations, from religious institutions to corporations to nation-states, run on constructions of the narrative instinct.

11. Curiosity Instinct

We like to call other species curious, but we are the most curious of all, an instinct which led us out of the savanna and led us to learn a great deal about the world around us, using that information to create the world in our collective minds. The curiosity instinct leads to unique human behavior and forms of organization like the scientific enterprise. Even before there were direct incentives to innovate, humans innovated out of curiosity.

12. Language Instinct

The psychologist Steven Pinker calls our DNA-level instinct to learn grammatically constructed language the Language Instinct. The idea that grammatical language is not a simple cultural artifact was first popularized by the linguist Noam Chomsky. As we saw with the narrative instinct, we use these instincts to create shared stories, as well as to gossip, solve problems, and fight, among other things. Grammatically ordered language theoretically carries infinite varying meaning.

13. First-Conclusion Bias

As Charlie Munger famously pointed out, the mind works a bit like a sperm and egg: the first idea gets in and then the mind shuts. Like many other tendencies, this is probably an energy-saving device. Our tendency to settle on first conclusions leads us to accept many erroneous results and cease asking questions; it can be countered with some simple and useful mental routines.

14. Tendency to Overgeneralize from Small Samples

It’s important for human beings to generalize; we need not see every instance to understand the general rule, and this works to our advantage. With generalizing, however, comes a subset of errors when we forget about the Law of Large Numbers and act as if it does not exist. We take a small number of instances and create a general category, even if we have no statistically sound basis for the conclusion.

15. Relative Satisfaction/Misery Tendencies

The envy tendency is probably the most obvious manifestation of the relative satisfaction tendency, but nearly all studies of human happiness show that it is related to the state of the person relative to either their past or their peers, not absolute.
These relative tendencies cause us great misery or happiness in a very wide variety of objectively different situations and make us poor predictors of our own behavior and feelings.

16. Commitment & Consistency Bias

As psychologists have frequently and famously demonstrated, humans are subject to a bias towards keeping their prior commitments and staying consistent with their prior selves when possible. This trait is necessary for social cohesion: people who often change their conclusions and habits are often distrusted. Yet our bias towards staying consistent can become, as one wag put it, a “hobgoblin of foolish minds” – when it is combined with the first-conclusion bias, we end up landing on poor answers and standing pat in the face of great evidence.

17. Hindsight Bias

Once we know the outcome, it’s nearly impossible to turn back the clock mentally. Our narrative instinct leads us to reason that we knew it all along (whatever “it” is), when in fact we are often simply reasoning post-hoc with information not available to us before the event. The hindsight bias explains why it’s wise to keep a journal of important decisions for an unaltered record and to re-examine our beliefs when we convince ourselves that we knew it all along.

18. Sensitivity to Fairness

Justice runs deep in our veins. In another illustration of our relative sense of well-being, we are careful arbiters of what is fair. Violations of fairness can be considered grounds for reciprocal action, or at least distrust. Yet fairness itself seems to be a moving target. What is seen as fair and just in one time and place may not be in another. Consider that slavery has been seen as perfectly natural and perfectly unnatural in alternating phases of human existence.

19. Tendency to Overestimate Consistency of Behavior (Fundamental Attribution Error)

We tend to over-ascribe the behavior of others to their innate traits rather than to situational factors, leading us to overestimate how consistent that behavior will be in the future. In such a situation, predicting behavior seems not very difficult. Of course, in practice this assumption is consistently demonstrated to be wrong, and we are consequently surprised when others do not act in accordance with the “innate” traits we’ve endowed them with.

20. Influence of Stress (Including Breaking Points)

Stress causes both mental and physiological responses and tends to amplify the other biases.
Almost all human mental biases become worse in the face of stress as the body goes into a fight-or-flight response, relying purely on instinct without the emergency brake of Daniel Kahneman’s “System 2” type of reasoning. Stress causes hasty decisions, immediacy, and a fallback to habit, thus giving rise to the elite soldiers’ motto: “In the thick of battle, you will not rise to the level of your expectations, but fall to the level of your training.”



21. Survivorship Bias

A major problem with historiography – our interpretation of the past – is that history is famously written by the victors. We do not see what Nassim Taleb calls the “silent grave” – the lottery ticket holders who did not win. Thus, we over-attribute success to things done by the successful agent rather than to randomness or luck, and we often learn false lessons by exclusively studying victors without seeing all of the accompanying losers who acted in the same way but were not lucky enough to succeed.

22. Tendency to Want to Do Something (Fight/Flight, Intervention, Demonstration of Value, etc.)

We might term this Boredom Syndrome: Most humans have the tendency to need to act, even when their actions are not needed. We also tend to offer solutions even when we do not have the knowledge to solve the problem.

23. Falsification / Confirmation Bias

What a man wishes, he also believes. Similarly, what we believe is what we choose to see. This is commonly referred to as the confirmation bias. It is a deeply ingrained mental habit, both energy-conserving and comfortable, to look for confirmations of long-held wisdom rather than violations. Yet the scientific process – including hypothesis generation, blind testing when needed, and objective statistical rigor – is designed to root out precisely the opposite, which is why it works so well when followed.

The modern scientific enterprise operates under the principle of falsification: A method is termed scientific if it can be stated in such a way that a certain defined result would cause it to be proved false. Pseudo-knowledge and pseudo-science operate and propagate by being unfalsifiable – as with astrology, we are unable to prove them either correct or incorrect because the conditions under which they would be shown false are never stated.

Source: https://fs.blog/mental-models/#human_nature_and_judgment

Psychology

Experts vs. Elites
“When an academic wins a Nobel prize, they have achieved a pinnacle of expertise. At which point they often start to wax philosophic and write op-eds. They seem to be making a bid to become an elite, because we all respect and want to associate with elites far more than with experts. Elites far less often lust after becoming experts, because we are often willing to treat elites as if they are experts.” — Experts vs. Elites

We tend to think that what we think is true.
By Shane Parrish, 17 October 2021, via email
We tend to think that what we think is true. And because we think something is true, we ignore information that might tell us it’s not. Charles Darwin deliberately looked for thoughts that disagreed with his own. He wrote, “whenever a published fact, a new observation or thought came across me, which was opposed to my general results, to make a memorandum of it without fail and at once; for I had found by experience that such facts and thoughts were far more apt to escape from memory than favorable ones.” Darwin was out for truth, not to confirm his view of the world. “If someone is able to show me that what I think or do is not right, I will happily change,” Marcus Aurelius said. “For I seek the truth, by which no one ever was truly harmed. Harmed is the person who continues in his self-deception and ignorance.” Surprises alert you to flawed thinking: when results are not what you expected, when facts disagree with you, when someone does something unexpected. “What surprise tells you,” my friend Adam Robinson says, “is that your model of the world is incorrect.” And when your model of the world is incorrect, you need to figure out why. When you catch yourself saying “that doesn’t make any sense,” “that shouldn’t happen,” or “I didn’t expect that,” you’re surprised. That’s your cue to pay attention. Surprises are a clue that you’re missing something. Dive in and figure out what.

Learning to Think Better
By Shane Parrish, no date
The quality of our thinking is proportional to the models in our head and their usefulness in the situation at hand. The more models you have—the bigger your toolbox—the more likely you are to have the right models to see reality. It turns out that when it comes to improving your ability to make decisions, variety matters. Most of us, however, are specialists. Instead of a latticework of mental models, we have a few from our own discipline. Each specialist sees something different. By default, a typical engineer will think in systems. A psychologist will think in terms of incentives. A



biologist will think in terms of evolution. By putting these disciplines together in our head, we can walk around a problem in a three-dimensional way. If we’re only looking at the problem one way, we’ve got a blind spot, and blind spots can kill you. Here’s another way to think about it: when a botanist looks at a forest, they may focus on the ecosystem; an environmentalist sees the impact of climate change; a forestry engineer, the state of the tree growth; a business person, the value of the land. None are wrong, but none of them alone can describe the full scope of the forest. Sharing knowledge, or learning the basics of the other disciplines, would lead to a more well-rounded understanding and better initial decisions about managing the forest. In a famous speech in the 1990s, Charlie Munger summed up this approach to practical wisdom through understanding mental models: “Well, the first rule is that you can’t really know anything if you just remember isolated facts and try and bang ’em back. If the facts don’t hang together on a latticework of theory, you don’t have them in a usable form. You’ve got to have models in your head. And you’ve got to array your experience both vicarious and direct on this latticework of models. You may have noticed students who just try to remember and pound back what is remembered. Well, they fail in school and in life. You’ve got to hang experience on a latticework of models in your head.”
Source: https://fs.blog/mental-models/#learning_to_think_better

Unreliable Memory
By Robert Glazer, Founder & CEO, Acceleration Partners, 16 September 2021
I remember those five minutes very clearly, as if they were a scene from a movie I’ve seen many times. My wife and I were heading to work together on a sunny September morning when we stopped at a Dunkin’ Donuts near our home to grab coffee. As I waited in line, the employees were chattering about something; when I stepped up to order, I heard one of them say, “A plane just hit the World Trade Center.” My initial assumption was that a small private plane had gone off course in a tragic accident. However, as we returned to our car and turned on the radio, I realized how wrong I was, and the gravity of the situation became clear.



That’s my vivid memory of how I first learned about the 9/11 attacks 20 years ago. But that recollection may not be correct. My experience is what is known as a flashbulb memory: a highly vivid and detailed “snapshot” of a moment in which a consequential, surprising, and emotionally affecting piece of news was learned. While flashbulb memories are vivid, they aren’t always accurate. As time passes, our minds confuse details of our past experiences and sometimes even combine multiple events into one memory. Our brains process those changes and create what is effectively a re-edited version of a video in our mental library. As a more mundane example, years ago I recounted to my wife a specific memory of visiting a friend at their ski lodge rental at a specific complex in New Hampshire. While I had no doubt about my recollection, my wife pointed out that she believed my friend had never rented that ski place. She was right; somehow, I had merged the memory of that rental house with a different memory of a weekend with my friend at another place altogether. One of the most comprehensive studies of flashbulb memory was actually conducted after 9/11. A group of researchers interviewed people about their experience 10 days after the event, then conducted a follow-up interview annually for several ensuing years. The subjects were asked to rate their confidence in their memories each time, with respondents’ confidence scores throughout the study averaging greater than four out of five. However, when the subjects’ memories were compared to the initial survey taken within 10 days of 9/11, there were significant inconsistencies. Researchers found that only about 66 percent of subjects’ memories were accurate a year after 9/11. The participants were especially unreliable at remembering their emotions after 9/11; those recollections were accurate only 40 percent of the time just one year after the tragedy.
In other words, while many people might remember their 9/11 experience accurately, a majority of people do not even recall their emotional reaction on the day, according to the study. Another interesting finding from the study was that once a participant added an incorrect detail into their memory, that error was likely to be repeated in later accounts, rather than corrected. In a sense, their memory replicated the error to create a new version of events.



One of the greatest challenges we face today is that people seem unwilling to challenge or change their assumptions and ideas. We rarely question our understanding of the world, even in the face of factual evidence, and this tendency is only exacerbated by the confirmation bias we experience in our own digital echo chambers. But if we have scientific evidence that even our memory of one of the most vivid, life-changing events of the 21st century is unreliable, we really should ask: what if our memories of the experiences that have shaped some of our most closely held beliefs aren’t even accurate? It never hurts to take a step back and question our own assumptions and beliefs. There is a lot we may be missing. What memory or experience should you consider questioning?
Source: https://www.robertglazer.com/friday-forward/flashbulb-memory-flaw/

Making Better Decisions
"The thing that’s very clear is that when people hear information that comports with whatever their tribe believes, or whatever their tribe supports, they’re willing to accept it without doing a lot of digging into the quality of the source, the quality of the information, the implications of the rest of the information that goes with it. Anything that challenges what their tribe believes, they are going to be more dismissive of, whether or not it comes from a quality source." — Making Better Decisions with Todd Simkin

Misinformation

Why Misinformation Spreads
By Leanna M. W. Lui, HBSc, 10 September 2021
Over the past 16 months, the COVID-19 pandemic has highlighted not only our vulnerability to disease outbreaks but also our susceptibility to misinformation and the dangers of "fake news." In fact, COVID-19 is not merely a pandemic but a syndemic of viral disease and misinformation. In the current digital age, there is an abundance of information at our fingertips. This has produced a surplus of accurate as well as inaccurate information ― all of it filtered through the various biases to which we humans are subject. Bias plays a significant role in how we process and interpret information. Our decision making and cognition are colored by our internal and external environmental



biases, whether through our emotions, societal influences, or cues from the "machines" that are now such an omnipresent part of our lives. Let's break them down:

• Emotional bias: We're only human, and our emotions often overwhelm objective judgment. Even when the evidence is of low quality, emotional attachments can deter us from rational thinking. This kind of bias can be rooted in personal experiences.
• Societal bias: The thoughts, opinions, or perspectives of peers are powerful forces that may influence our decisions and viewpoints. We can conceptualize our social networks as partisan circles and "echo chambers." This bias is perhaps most evident on social media platforms.
• Machine bias: Our online platforms are laced with algorithms that tailor the content we see. Accordingly, that curated (and, by extension, less diverse) content may reinforce existing biases, such as confirmation bias.

Although bias plays a significant role in decision making, we should also consider intuition vs deliberation ― and whether the "gut" is a reliable source of information.

Intuition vs Deliberation: The Power of Reasoning
Dual process theory suggests that thought may be categorized in two ways: system 1, rapid, intuitive, or automatic thinking (which may be a result of personal experience); and system 2, deliberate or controlled thinking (ie, reasoned thinking). System 1 vs system 2 may be conceptualized as fast vs slow thinking. Let's use the Cognitive Reflection Test to illustrate dual process theory. This test measures the ability to reflect and deliberate on a question and to forgo an intuitive, rapid response. One of the questions asks: "A bat and a ball cost $1.10 in total. The bat costs $1.00 more than the ball. How much does the ball cost?" A common answer is that the ball costs $0.10. However, the ball actually costs $0.05. The common response is a "gut" response rather than an analytic or deliberate one. This example can be extrapolated to social media behavior, such as when individuals endorse beliefs and behaviors that may be far from the truth (eg, conspiracy ideation). It is not uncommon for individuals to rely on intuition, which may be incorrect, as a driving source of truth. Although one's intuition can be correct, it's important to be careful and to deliberate.
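The bat-and-ball question is simple algebra once written down: ball + (ball + $1.00) = $1.10. A quick sketch makes the deliberate, system 2 answer explicit (the function name is mine, for illustration only):

```python
def solve_bat_and_ball(total=1.10, difference=1.00):
    """Solve ball + (ball + difference) = total for both prices."""
    ball = (total - difference) / 2   # 2 * ball = total - difference
    bat = ball + difference
    return bat, ball

bat, ball = solve_bat_and_ball()
print(f"ball = ${ball:.2f}, bat = ${bat:.2f}")  # ball = $0.05, bat = $1.05
```

The intuitive $0.10 answer fails the check: a $0.10 ball implies a $1.10 bat, and a $1.20 total.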


But would deliberate engagement lead to more politically valenced perspectives? One hypothesis posits that system 2 reasoning can lead to false claims and worsening discernment of truth. Another, more popular account of classical reasoning says that more thoughtful engagement (regardless of one's political beliefs) makes one less susceptible to false news (eg, hyperpartisan news). Additionally, good literacy (political, scientific, or general) is important for discerning the truth, especially regarding events in which the information and/or claims of knowledge have been heavily manipulated.

Are Believing and Sharing the Same?
Interestingly, believing a headline and sharing it are not the same. A study that investigated the difference between the two found that although individuals were able to discern the validity of headlines, the veracity of those headlines was not a determining factor in sharing the story on social media. It has been suggested that the social media context may distract individuals from engaging in the deliberate thinking that would enhance their ability to determine the accuracy of content. The dissociation between truthfulness and sharing may be a result of the "attention economy," which refers to user engagement through likes, comments, shares, and so forth. As such, social media behavior and content consumption may not necessarily reflect one's beliefs and may be influenced by what others value. To combat the spread of misinformation, it has been suggested that proactive interventions ― "prebunking" or "inoculation" ― are necessary. This idea is in accordance with inoculation theory, which suggests that pre-exposure can confer resistance to challenge, a line of thinking aligned with the use of vaccines to counter medical illnesses. Increasing awareness of individual vulnerability to manipulation and misinformation has also been proposed as a strategy to resist persuasion.
The age-old tension between what others think of us and what we believe to be true existed long before the viral overtake of social media. The main difference today is that social media acts as a catalyst for pockets of misinformation. Although social media outlets are cracking down on "false news," we must consider what criteria should be used to identify false information. Should external bodies regulate our content consumption? We are certainly entering a gray zone of "wrong" vs "right." With the overabundance of information available online, it may become a case of "them" vs "us" ― those who do not believe in the existence of misinformation vs those who do.
Source: https://www.medscape.com/viewarticle/958511?spon=12&uac=17144PJ&impID=3655207&sso=true&faf=1&src=WNL_mdpls_210921_mscpedit_psych



Big Tech and Vaccine Misinformation
By Katie Jennings, 29 September 2021
Big Tech's piecemeal response to the spread of medical misinformation continued today, with YouTube becoming the latest company to ban "harmful vaccine content" of any kind from videos posted on the site – not just content related to Covid-19. "We've steadily seen false claims about the coronavirus vaccines spill over into misinformation about vaccines in general," the company said in a statement. The move comes two months after the Biden administration specifically pointed the finger at technology companies and issued a "misinformation advisory." The YouTube ban extends to several members of the Disinformation Dozen, a label given by the Center for Countering Digital Hate to the most prominent vectors of vaccine untruths, Forbes' Graison Dangor reports. That includes the channel for the Children's Health Defense Fund, whose board is chaired by Robert F. Kennedy, Jr., who, long before the Covid-19 pandemic, peddled the claim that vaccines cause autism. When asked why YouTube didn't act sooner, a company executive told The Washington Post that "developing robust policies takes time." Any questions? Send Katie Jennings an email at kjennings@forbes.com. She rocks.
Source: https://mail.google.com/mail/u/0?ui=2&ik=9bfca98cee&view=lg&permmsgid=msg-f:1712279075218543769

Unproven, dangerous and selling fast
By Fiona Rutherford, 30 September 2021
Ivermectin has become a lightning rod in the Covid culture wars. Despite safety concerns and a rash of poisonings, the antiparasitic drug has been promoted by fringe groups, celebrities, and social media as a treatment for the virus. Both the U.S. Food and Drug Administration and the U.S. Centers for Disease Control and Prevention warn against using ivermectin to treat Covid. That hasn't stopped some health-care professionals from being convinced that it works. Catherine Moring, president of Mississippi's Public Health Association, turned to ivermectin when she tested positive. She gathered information from newsletters and podcasts, read research papers, and spoke to doctors she knew. While the drug is often used to treat worms in livestock and domestic animals, some tests have shown it can also reduce viral load in humans. But many studies show ivermectin's benefits for Covid patients are small and lack good evidence, according to



a recent review by the Cochrane Infectious Diseases Group, which evaluates medical practices. Moring turned to a group of physicians and advocates called the Front Line Covid-19 Critical Care Alliance for help getting a prescription for the drug. Some people who are unable to get prescriptions for ivermectin are taking animal formulations, which can be easier to acquire. In an echo of the controversy over hydroxychloroquine — the malaria drug touted by former President Donald Trump as a Covid "game-changer" — ivermectin sales have soared in recent months, and poisonings have risen in tandem. Outpatient prescriptions rose more than 24-fold from pre-pandemic levels to 88,000 a week in the seven days ending Aug. 13. And there have been roughly 1.2 million retail prescriptions written this year for the drug, compared with 340,000 in 2020, according to data provider Symphony Health. The run on ivermectin has strained some veterinary practices, which rely on the medicine to treat horses, cows, and other livestock. While many social media platforms have taken steps to prevent the spread of misinformation about the drug, it doesn't always work. Ivermectin-focused groups still exist and are gaining members, according to Media Matters Associate Research Director Kayla Gogarty. Some posters are able to get around restrictions simply by avoiding mentioning the drug by name.
Source: https://link.mail.bloombergbusiness.com/click/25191100.12302/aHR0cHM6Ly93d3cuYmxvb21iZXJnLmNvbS9wcm9nbm9zaXM/55089ac93b35d034698d810fBdb88cad8

Vaccines

COVID Vaccine Myths, Questions, and Rumors
By Rhonda Patrick and Roger Seheult, 17 September 2021
For those of you who don't know, MedCram has been a uniquely authoritative and influential source of COVID-19 information since early in the pandemic through its incredibly comprehensive COVID-19 updates covering emerging research. Their video on Covid-19 and vitamin D alone has received over 12 million views. Not only that: very often, when people cite protective strategies that might be an alternative to vaccination, they are unwittingly citing strategies directly proposed


and popularized by Dr. Roger Seheult in his own COVID-19 updates series. When vaccination wasn't an option, he was giving people leads... and he still follows the evidence in every update they release. There are loud, strident voices when it comes to anything COVID-19. Some of these voices make broad and scientifically inaccurate proclamations with surprising confidence. In contrast, we speak to the facts. In other words, to the best of our ability, we have endeavored to do the EXACT opposite: to have a thoughtful, merit-based discussion inclusive of realistic cost-benefit analysis while acknowledging trade-offs. Listen on Apple Podcasts, Spotify, or your favorite podcast player as they discuss:
· 00:01:20 - Should the young and healthy get vaccinated?
· 00:06:47 - Risk of myocarditis
· 00:10:40 - Long-haul COVID
· 00:19:58 - Spike protein cytotoxicity
· 00:35:39 - COVID-19 Vaccine Adverse Event Reporting System (VAERS)
· 01:01:17 - Antibody-dependent enhancement?
· 01:09:16 - Do COVID vaccines damage fertility?
· 01:14:13 - Will mRNA vaccines alter DNA?
· 01:22:32 - Are alternatives like ivermectin as effective as the vaccine?
· 01:42:02 - Do vaccines prevent Delta transmission?
· 01:56:04 - Will the virus become more deadly due to vaccines?
· 02:05:07 - T-cell immunity vs. antibody immunity
· 02:08:34 - Long-term side effects / were vaccines rushed?
While I would like this episode to be "definitive" in its two-and-a-half-hour discussion, the reality is that science is always bringing new details to light. By introducing some of you to Dr. Seheult through this discussion we had together, I hope that many of you will feel you now have a new, highly apolitical and unbiased source of COVID-19 information.



We would ALL do better to put aside partisan thinking. If you come down on the other side of any of the issues we discuss, I ask that you treat this discussion with the thoughtfulness, patience, and humility it deserves by listening to the full conversation, if possible.
Source: https://youtu.be/pp-nPZETLTo

Perspective
By Chris Stout, PsyD, 16 October 2021
We get our dogs, cats, and horses vaccinated. Many of us have been vaccinated against mumps, measles, rubella, diphtheria, whooping cough, tetanus, polio, smallpox, yellow fever, typhoid, hepatitis B, pneumonia, shingles, and the flu, and have received gamma globulin to resist hepatitis A. There has been nowhere near this much hubbub over any of those. It's a curious difference.

Prosocial Ways to Respond

Seeking Understanding
By Robert Glazer, Founder & CEO, Acceleration Partners, 28 February 2019
Last week, I received over 100 e-mails from around the world in response to my Love and Hate Friday Forward. One of them was from the founder of TEDx Kenmore Square, Noah Siegel, who pointed me to a TED Talk that I watched with great interest, titled "Why I Have Coffee with People Who Send Me Hate Mail." The speaker, Özlem Cekic, was born in Turkey with Kurdish roots. In 2007, she became one of the first women with a Muslim immigrant background to be elected to the Danish parliament. Almost immediately after joining parliament, her e-mail inbox began to fill with hate mail that included xenophobic comments and questions such as "What's a raghead like you doing in our parliament?" Understandably, she deleted the e-mails and assumed the senders were unreasonable fanatics and racists. One friend even suggested she save the e-mails so that if something happened to her, the police would have leads to go on. But another friend took a different perspective, suggesting she reach out to the writers and invite them to meet for coffee.



After giving it real thought, Cekic decided to give it a try. She truly wanted to understand how people could hate her so vehemently without even knowing her. She started by reaching out to the person who had sent her the most hate mail, a man named Ingolf, and asked if she could meet him at his home. To her surprise, he agreed, and a few weeks later she was sitting in his house over cups of coffee. They talked for over two and a half hours. By the end, they both realized that their similarities far outweighed their differences. This first meeting inspired her to have more of these coffee chats, which she named "Dialogue Coffee" and which now number in the hundreds. To convey trust, she always meets people in their own homes and brings food to help discover what they have in common. Throughout this journey, Cekic came to realize she had been just as judgmental of those who had sent the hate mail as they had been of her. By sitting down with them over coffee and simply talking, she learned how to separate the hateful viewpoints from the person expressing them in order to gain perspective. One of the biggest lessons she has learned from Dialogue Coffee is that people are afraid of people they don't know. She has also become much more aware of how dangerous generalizations are and how they can lead to demonizing entire groups of people. In her conversations, Cekic heard time and again that people believe "other people" are to blame for spreading hate and perpetuating negative stereotypes. When she asked them about their own role and what they could do to help stop it, they often responded with "Me? I have no power." What I've taken from Cekic's experience is that even generalized terms – "the left," "the right," "the socialists," "the deplorables," "the ____ media" – are actually verbal weapons of mass social destruction.
By painting entire groups of people with a broad stroke, these generalizations exacerbate fear and hate far more than we imagine. Yet simply turn on the news and these terms are increasingly pervasive. We need our leaders to stop generalizing and using terms specifically designed to incite hate, distrust, or fear. Instead, we should be encouraging respectful disagreement and dialogue, embracing disagreement and seeking to understand where other people are coming from. In doing so, we might learn that the person who believes in a more equal distribution of wealth does so because they grew up homeless and were forced to work below minimum wage. Or that the person who does not trust government to be the arbiter


of wealth or resources holds this perspective because their family's business was taken away by the government. It's time – now – for us all to take responsibility for our generalizations. Let's follow the lead of Özlem Cekic and engage in dialogue that does not demonize individuals or entire groups. Cekic credits her friendships with many different types of people, and her Dialogue Coffee in particular, with having "vaccinated her against her own prejudices." Disagree? Let's have coffee.
Source: https://www.robertglazer.com/friday-forward/seeking-understanding/


If you'd like to learn more or connect, please do, just click here. You can join my email list to keep in touch. Tools and my podcast are available via http://ALifeInFull.org. Click here for a free subscription to our weekly LinkedIn Newsletter.

If you liked this article, you may also like:
Special Edition – Strong Opinions Loosely Held: Part 2 - Misinformation, Vaccines, and Prosocial Ways to Respond
Special Edition – Strong Opinions Loosely Held: Part 1 - Science and Psychology
The Reproducibility Problem in Science—What's a Scientist to do? (Part 3 in a series of 3)
The Reproducibility Problem in Science—Shame on us? (Part 2 in a series of 3)
The Reproducibility Problem—Can Science be Trusted? (Part 1 in a series of 3)
Can AI Really Make Healthcare More Human—and not be creepy?
How to Protect Yourself from Fad Science
Technology Trends in Healthcare and Medicine: Will 2019 Be Different?
Commoditization, Retailization and Something (Much) Worse in Medicine and Healthcare



Fits and Starts: Predicting the (Very) Near Future of Technology and Behavioral Healthcare
Why I think 2018 will (Finally) be the Tipping Point for Medicine and Technology
Healthcare Innovation: Are there really Medical Unicorns?
Can (or Should) We Guarantee Medical Outcomes?
A Cure for What Ails Healthcare's Benchmarking Ills?
Why Global Health Matters


