Flourishing

Stealing the Scientific Superpower

Written by Joel Lehman

Daniel then said to the guard [...] "Give us nothing but vegetables to eat and water to drink. Then compare our appearance with that of the young men who eat the royal food [...]." So he agreed to this and tested them for ten days. At the end of the ten days they looked healthier and better nourished than any of the young men who ate the royal food. So the guard took away their choice food and the wine they were to drink and gave them vegetables instead.

Daniel 1:11-16 (The Bible, New International Version)

Maybe if you were Daniel you'd have gone with wine over water, or beef over beans -- but this biblical experiment highlights the core of the scientific method: It's hard to argue with results. Put your money where your mouth is. Do an experiment and let the actual results speak. This essay aims to show you science's superpower, and to convince you that it's valuable and applicable to your own life.

Let's remind ourselves of what oceans of suffering science has helped us to drain. Our best guess at how long the average Bronze Age man lived was just 26 years. Lucky for me (turning 40 soon!) that things have changed. As of 2015, an American can expect to live almost 79 years -- roughly a threefold increase! We owe that tripling of our lifespan to an ever-expanding understanding of human health, a debt that traces back to science. This includes advances like antibiotics, sanitation, vaccines, cancer treatments, and complicated surgeries. Beyond living longer, we also live better. Science is what keeps planes in the sky, what powers our computers, and what relays our voices cross-country to those we love.

Science works. If you had cancer and could pick only one option: Would you go to a science-minded medical doctor, or would you see a new-age energy healer? If you were traveling from Ohio to New York, would you rather fly on an airplane designed by professional engineers -- or one made by a weekend tinkerer? When things need to be precise and correct, and your life's on the line, we already act like science is the best bet. It's not that gut judgment calls or intuition aren't important. If you're driving and think you see a child in the middle of the road, slam on the brakes, obviously! That's not the time to run a four-week mathematical analysis. But when it comes to hard facts and there's time enough to look it up or do the math -- well, then shoot-from-the-hip gut instinct can't compete. How could my one-second estimate of how many gumballs are in a glass jar be more accurate than counting each of them one by one?

The point isn't to worship science, but to appreciate that it has a superpower we'd like to absorb. Over time, science tends to get things more right. We make new antibiotics, we invent faster computers, we build safer cars. And if you and I could get things right more often, we'd be better at living the lives we desire. We'd help ourselves unpave the road to hell, by better aligning our intentions with reality. So the question is, how does science tend to get the right answer, and how can we absorb that ability to improve our own lives?

The Weedwacker of Truth

Maybe your first impression of science is an elite club of arrogant folks in white lab coats, playing with test tubes, laughing and high-fiving about how fun it is to talk down to the rest of us. Or maybe you think about physics class, where the teacher had you time how long objects took to hit the ground, and a bad grade awaited if you got bored with following the lesson plan or dared to demonstrate creativity. But the real scientific process is much more creative than high school science class, more like a blues guitarist creating a new riff than paint-by-numbers, more like the cut-throat politics of Game of Thrones than the obedient student following the teacher in lock-step.

In science, no one person or group is in control -- in some ways it's anarchy, riding the edge of chaos. The way to fame in scientific history is to disprove what "everybody knows." Think Einstein's theory of general relativity is wrong? Well, recognition and respect are yours if you can show it! Everyone tries to disprove everybody else -- take nothing on faith. We can view science as the weedwacker of truth -- it's designed to cut towards truth by pruning away the weeds of bullshit. That's the key magic that makes science work: No thought is off limits if you can back it up with data. Of course, if someone tries to disprove your prized pet theory, you might not be happy about it, and try to play politics to undermine your opponents -- to let your weeds spread and grow taller. But over time, the strongest evidence eventually wins out. If you're wrong, no matter how tall the weeds, as the data accumulates, the weedwacker eventually cuts you down.

How does this work in practice? Back in the 1600s, most people believed in "spontaneous generation," the theory that sawdust spontaneously turns into fleas, or that rancid meat spontaneously transforms into maggots. We might think that's silly now, but it made sense: People saw maggots growing in meat, but not the tiny eggs that flies were laying there. It seemed like maggots were sprouting right out of the meat. An Italian scientist, Francesco Redi, wasn't sold -- he thought that life doesn't easily emerge from non-life. Beyond being a skeptic who disproved myths -- like that viper venom is poisonous if drunk (it's not!) -- he was also an accomplished poet.

Francesco devised an experiment where competing theories predict different outcomes. He put raw meat in jars. Some of the jars, he left open. Some of them, he covered. Spontaneous generation predicts that maggots should appear in all of the jars. But Francesco thought maggots should only appear in the open ones, where flies could land on the meat and lay eggs. In either case, his laboratory must have smelled terrible. Francesco repeated the test many times, and each time, maggots appeared only in the open jars -- a strike against spontaneous generation. The weedwacker of truth in action.
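If it helps to see that logic laid bare, here's a toy sketch in Python (the jar data is just Redi's reported pattern, reduced to true/false): each theory predicts an outcome for every jar, and the observations falsify whichever theory predicts wrongly.

```python
# Toy illustration (hypothetical, simplified data): two theories predict
# different outcomes for the same jars, so one set of observations can
# cut one theory down.

observed_maggots = {"open jar": True, "covered jar": False}

predictions = {
    "spontaneous generation": {"open jar": True, "covered jar": True},
    "flies lay eggs":         {"open jar": True, "covered jar": False},
}

for theory, predicted in predictions.items():
    consistent = all(observed_maggots[jar] == outcome
                     for jar, outcome in predicted.items())
    print(f"{theory}: {'survives' if consistent else 'falsified'}")
```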

We'll talk more later about how this move can help your own personal life. But for now, let's dive into an important puzzle -- one that's critical for understanding the scientific superpower, but that's rarely well-explained. This is the mystery: If I perform an experiment and get exciting results, how can the rest of the world learn from what I've done -- without anyone needing to take it on faith? Does science require the honesty of everyone involved -- which maybe is too much to ask from flawed humans?

Nobody Understands Everything

The first uncomfortable thing to confront is that no one understands all of science. To be honest, I don't even understand how power steering or a flush toilet works. It's not possible in one lifetime to understand the arcane depths of mathematics, electronics, physics, philosophy, chemistry, biology, and economics -- all together. That's scary in some ways, because no one person has an unshakable grasp on how things work. We all rely on others, somehow, to make sense of the world's complexity. But how can we rely on knowledge we haven't checked ourselves (a seeming leap of faith)?

The second insight is deeper, and more important: Humans can build factories for creating and refining knowledge, and some knowledge factories are better built than others. Imagine two car factories. One was built overnight from drawings made by finger-painting toddlers, and the other took years of careful thought from a team of top-notch engineers. If I needed to buy a car to drive my family in, I know which factory I'd want it to come from. In the same way, if we can appreciate that a knowledge factory is well-built, we can be more confident in the knowledge it produces. This will be a core idea in future chapters, as we try to figure out how much trust we should put in different information sources (like websites and blogs).

In this section, though, we're building up to talking about peer review, a powerful ingredient for building reliable knowledge factories. Maybe you've heard the term before -- here we'll describe it so you can understand why it's so useful. To be up front, it's important to acknowledge that peer review is far from "peer"-fect (sorry). Winston Churchill said that "Democracy is the worst form of government -- except for all those others that have been tried." That quote inspired many to say the same thing about capitalism. In a similar way, peer review is currently the best ingredient we have for making a good knowledge factory, but like democracy and capitalism it has its own problems. To understand peer review, first we need to understand a bit more about how research scientists do their job.

Scientists and Publishing

Because they can't know or study everything, scientists specialize. Most stick to one field -- like psychology or physics -- and specialize even further within it. A biologist might spend their entire career studying bat ears, or an economist might mainly study the effects of taxes in Spain. The hope is that, taken together, all the specialists' expertise will weave a wide web of deep knowledge, one that can be put to good use in the world. For example, imagine you were a doctor and encountered a patient with a rare disease (like Colorado Tick Fever). To deal with that disease, you wouldn't have to do any scientific experiments yourself. If you knew where to look, you could read an article where a specialist researcher had already discovered what treatment was needed (nothing -- it takes about one uncomfortable week to run its course). Researchers tend to present new knowledge in written papers (usually less than 20 pages each). Long ago these papers were hand-written, but of course now they are typed and available for download.

The main way that scientific knowledge expands is through journals that publish scientific studies every few months. While these used to be mailed out on paper, as you would expect, most are now read online. There are different journals for different scientific specialties, going by names like the "Journal of AI Research," "Advances in Physics," and even "The Journal of Raptor Research" (which covers birds of prey). The purpose of journals is to record new knowledge and to keep scientists up to date with what's going on in their area of research.

When a newspaper article says that "a study shows that flossing helps dental health," now you know what they're talking about: Some scientists did some flossing experiments, sent the results to a journal, and the study was published to the world in the journal's next issue. But journals don't publish every study that gets sent their way. No, their purpose is to act as a filter for good science. They aim only to publish studies with well-designed experiments and analysis. If there was no filter, and every crackpot sent in twelve pages of gibberish to a journal (with titles like "On the Relationship Between Crunchy Peanut Butter and Train Crashes," or "Perpetual Motion through Cold Fusion and Telekinesis"), journals would be overrun with nonsense, and be useful only as toilet paper.

Most of the time, the studies that scientists send in aren't published -- journals often reject them, or tell the scientists to fix some problems and resubmit. That might seem strange at first -- after all, who gets to decide whether the experiment is a good one? This is a critical issue. If there were only one person in charge of a journal, and it was just up to him, maybe only studies from the people he liked would be accepted, especially if they took him out to a fancy dinner and footed the bill.

The Brutal Genius of Peer Review

Science works because the system is designed to make corruption difficult -- it takes human flaws into account. The knowledge factory is designed to be reliable, so no single scientist gets to run the show. In fact, in a cruel but ingenious system, science pits scientists against one another, to keep everyone honest. Scientists try to find the holes in each other's studies. This gritty process is called peer review, although that's too polite a name for what it really is. A better name might be "anonymous strangers rip your precious work to shreds."

To whet your appetite, here are a few choice pieces of peer-review feedback (from the Tumblr called "Shit My Reviewers Say"): "The candidate demonstrates no understanding of the subject matter and his proposal title is misleading. Furthermore, his career prospects are unclear," "The author does not exhibit adequate acquaintance with the subject under discussion, the scholarship on it, the structure of logical argument, or the writing of English," and "The paper descends into nonsense, never to return, on line 44." Maybe you can tell that at times it gets vicious. Let's walk through an example of how it works. I'm going to take a little creative license, but the following example is roughly based on a real discovery by a real scientist.

Step 1: Perform and Record your Experiments

Let's imagine you are Niko Tinbergen, a Dutch bird scientist. You have a theory: Animals often rely on simple signals, like color or shape, when deciding how to act in the world. You think they might not really understand things deeply like we do. So you decide to perform experiments where you replace the real eggs sitting in birds' nests with plaster ones that you made yourself. You painstakingly create many fake eggs, and carefully record which eggs the birds most prefer to sit on.

It turns out, if you make a fake egg that's bigger and brighter-colored than the real eggs, the bird prefers that exaggerated fake and ignores its real eggs. The data you've collected supports your theory, because the fake eggs resemble the real ones only in shape and color. More than just a fun way to trick birds, you feel that this discovery is important: You call these exaggerated fakes "supernormal stimuli" -- stimuli that an animal prefers over the real things it naturally encounters. (We'll talk more about supernormal stimuli later, but as a teaser: When women put on make-up to make their cheeks redder, or when men take steroids to grow big muscles, humans are basically playing the same game!)

Step 2: Write and Submit Your Paper

The next step for you is to write a research paper (which is just a fancy name for a report). The main idea is to share your experiment and the data you collected. That way, another scientist can understand what you've done and can see the evidence you've gathered. Also, if they really wanted to make sure you weren't lying, they could recreate the experiment from your report to check the results themselves.

Importantly, when you write a research paper, you are expected to cite other people's work. This means that in your paper you reference other people's published papers. In your supernormal stimuli paper, for example, you write that this experiment was inspired by a paper that Konrad Lorenz (an Austrian bird scientist) wrote about geese. He also believed animals often rely on simple signals, and his paper showed that baby geese depend on a simple rule to identify their mother: Geese assume that their mom is the first moving object they ever see. Konrad had baby geese imprint on him -- he made sure he was the first thing they saw, and they followed him around like he was their mother! Between you and Konrad, it seems like European scientists really enjoy tricking birds.

Because papers cite each other, the scientific literature as a whole forms an interconnected web. You can jump from paper to paper by following citations, and survey the important discoveries going on in one area (just like you might go on a learning binge on Wikipedia by following link after link).

After your research paper is finished, the next step is to send it off to a journal, one that deals in the same kind of research you just performed and documented. You look around and find a Dutch journal called "The Living Nature" (you're Dutch, after all), which deals with nature and animal behavior. You then submit a copy of your paper there via mail (this is before the internet).

Step 3: Wait

In the third step, you sit and wait. But behind the scenes, the action is heating up. In peer review, when you send your precious new study to a journal, the editor of the journal sends copies of your paper to respected scientists in your area of research (in this case, other bird scientists). These are your reviewers, who act as the sharp blades of science's weedwacker.

Their instructions are to comb through your study, looking for flaws and weaknesses. Maybe you made a mistake in how you collected data that makes it worthless. Or maybe you did the math wrong in your statistics. Their job is to sniff out the problems, and write a small report about whether they think the science is done properly enough to be added to the web of scientific knowledge -- by being published in the journal.

Importantly, the reviewers are anonymous. You won't be told who they are, and they won't know who else is reviewing the paper (sometimes the reviewers won't even know who you are -- in what's called a double-blind review process). So they can tell you what they really think, without fear you'll hold a grudge. They each go off alone and read your study, and send their report to the editor.

The editor then reads their reports, and makes a decision in line with what they recommend -- whether to reject your paper, ask you to fix things and resubmit it, or accept it as it is. In general, the editor will do what the reviewers recommend -- it would look suspicious to the reviewers if they all recommended rejecting the paper but the editor decided to accept it anyway, for example.

Step 4: Celebrate or Cry

You and the reviewers then each receive all the review reports (recall that these reports are anonymous). That way, reviewers get a sense of how other people reviewed the paper, and you get to see what people really think about your work -- along with the final judgment on what happens to your paper.

You'll learn whether it will be added to the web of knowledge, or whether it's been cold-heartedly rejected. It's a heart-in-your-throat moment when you open the mail (e-mail these days). In this case, the paper is accepted, with great reviews! Congratulations on your success, Niko! But if it had been rejected, you could always try to change your paper based on the feedback you got, and then submit it to another journal and hope for better luck. And indeed, many important papers were at first rejected before they were published.

A Pretty-Okay System

Peer review isn't perfect. First, you have to hope that journal editors are chosen well, because they wield some power. Luckily, it's tough to become an editor of an important journal. To become one you need to be an accomplished scientist, and to have gained the respect of your community. That's no guarantee you'll never get a bad editor, but because being an editor is prestigious, and because scientific reputation is critical to a scientist's career, editors generally take their jobs seriously and do a fairly good job. Beyond the editors, you also have to depend on your reviewers. Sure, you might get some bad reviewers who don't know what they're talking about, or who have a grudge against your hypothesis, but on average, the crazy papers that make no sense are weeded out, and studies with careful experiments and good data collection get published to the wider community of scientists.

Still, you might think, if there's a bad apple, won't the whole system fall apart? What if Dr. Jack Jerk, a disturbed scientist, shamelessly falsifies all of his experiments and data? What if Dr. Jerk's fake study looks like it contains well-designed experiments and analysis? Then it could get through peer review, and be installed into the web of knowledge. Wouldn't that undermine science and break the whole system? But here's a subtle and important facet of science -- you don't make confident conclusions from just one study, and actually, you never make completely confident conclusions, ever! This goes back to the point about how important it is to think in shades of gray, instead of pure black or pure white. Like the philosopher Voltaire said, "Uncertainty is an uncomfortable position. But certainty is an absurd one."

The reason you should never be completely certain is that science is designed to bounce back from mistakes. One study could be wrong, or misleading. So even if it looks like good science, a new paper that presents very surprising results usually leads to skepticism. For example, if Dr. Jerk's study was published, and the results were surprising or profound, other scientists would try to build off of that study. But if Dr. Jerk's results were made up, follow-up experiments would reveal that it's impossible to replicate the result. The same experiment described in Dr. Jerk's paper would produce very different results when someone else conducts it. The follow-up scientists could then publish the conflicting result, and Dr. Jerk's reputation would be in jeopardy, especially if no matter how hard other scientists try they can't recreate his results. This could really hurt Dr. Jerk's career, because honesty is a deeply-cherished virtue in science.

Replication is another critical ingredient in science. Every good study describes a recipe for recreating its data. You don't have to take anything for granted, because you could rerun the experiment, and the results should be similar. If they're not, maybe the published study wasn't described well enough, or the study's author made a mistake in collecting the results, or in rare cases, maybe there was actual fraud. But it will come to light, sooner or later, as more and more studies accumulate. The beauty of science is that it's self-correcting. Because no one pretends that anything is known for absolute certain, there's always room for new experiments to overturn past understanding. The weedwacker of truth gets its man eventually.

The point of talking about peer review is to give an example of the machinery that goes into creating knowledge. By understanding how peer review works, we can begin to have more confidence in the knowledge produced by peer-reviewed journals. There's been some vetting of knowledge, by a process that basically makes sense. Of course, not all information you encounter will go through such a review. For example, if I write something on my personal blog, you might have no reason to trust whatever data I present. If you know me personally, and you think I'm knowledgeable, that might be reason to place some trust in my blog, but how sure are you really that I know what I'm talking about? We'll dig more into this problem of placing trust in information sources next chapter.

But for now, we've arrived at another mystery. Science claims we should never be certain of anything, because that way we are open to new evidence and to overturning false beliefs. But this attitude of never-being-certain creates a serious complication. If nothing is ever certain, why should I get a flu vaccine? If we could overturn everything we know about physics tomorrow, how should engineers design cars today? How do I make sense of the world through the perspective of uncertainty? The answer to this question is hugely important, and relates to a misunderstood phrase: Scientific consensus.

When Scientific Gladiators Begrudgingly Agree

If nobody's in charge of science, and nothing in science is absolutely certain, what can we make of this strange phrase, "scientific consensus?" We often hear it in the media, and it stirs controversy. Doesn't it contradict the idea that nothing in science is ever settled, and that scientists are always trying to disprove one another?

If there's a scientific consensus that the flu vaccine saves lives, it sounds as if there's some committee that got together to decide the truth. But truth isn't something that voting could ever decide -- it's not a democracy. Truth is rooted in reality, and can only be as it is -- no committee can change it by decree (though many might try). From this angle, it's understandable that hammering others with "the scientific consensus" seems absurd to some -- it's just the lab-coat army on a power trip.

But deeper waters underlie "scientific consensus." Scientists have done a bad job communicating what it means, and why we should take it seriously. But it's a powerful idea, one that's critical to connecting science to the real world, and it too is something we can make good use of in our own lives. So let's try to understand it for real.

I mentioned earlier that science is a cut-throat anarchy with nobody in charge. Most scientists don't completely agree with all other scientists in their field. In fact, there are often heated rivalries between scientific labs, all of them trying to make the next big discovery. For example, in the late 1800s there were "Bone Wars" between two fiercely competitive paleontologists (named E. D. Cope and O. C. Marsh) seeking glory by discovering as many new species of dinosaur as possible. Both went bankrupt in their win-at-all-costs pursuit of science, sometimes resorting to bribery, theft, and destroying bones, to get ahead and sabotage the other guy. How could this kind of scientific battlefield create some orderly consensus?

Well, it turns out that for many scientific questions -- the ones on the wild-west frontier of science -- there's actually no scientific consensus at all. For example, it's really not clear yet what the optimal diet is if you want to lose weight. We certainly have knowledge, like that eating nothing but Twinkies is a bad idea, but there's currently disagreement among scientists who study nutrition over whether a low-fat diet or a low-carb diet is better (or maybe either will work, or it depends on the individual). If you polled all the scientists in nutrition science, there'd be no dominant answer. In those cases, we still don't know enough -- more experiments need to be done to tease apart which theory is better.

For many scientific questions, however, there is consensus: Nearly all scientists in the relevant field agree on the answer. After many experiments have been done, after theories have been tested from many, many angles, it can become clear that one current theory better matches the evidence, no matter how hard scientists try to break it. Gravity seems like a sure bet at this point, for example, and spontaneous generation seems like a clear bust.

So even though scientists might disagree over many things, given overwhelming and consistent evidence, they'll tend to independently read the evidence similarly. Or at least they'll agree that one theory is currently the strongest. If you gave thirty mathematicians the problem one plus one, you'd expect them all to get the answer two. That's really all "scientific consensus" means -- most scientists in the field agree on the current-best answer to the question. It could be overturned tomorrow with a surprising new experiment, but for now at least, it's our best understanding of the world.

The Power of Scientific Consensus

Let me repeat: Scientific consensus isn't about claiming that the truth can be decided by scientists voting. We know truth doesn't work that way. So, even if there's scientific consensus about some topic, it doesn't guarantee that the consensus is right. But also remember that in our quest to extinguish wrongness, we've already given up on guarantees about truth anyway! That's the scientific way -- question everything. Consensus doesn't prove anything, but its power is that it's strong evidence: If scientists with every reason to disagree with each other actually agree, that's something to really take note of.

The scientific consensus on some question is gathered from the relevant scientists. So, if we were curious about scientific consensus on diet, we would ask nutrition scientists, not those who study asteroids. Whatever field of science is most related to the question, the consensus is taken from the scientists who have studied that topic more than anybody else on the planet. This leads to another way to understand the power of scientific consensus: If consensus exists on some topic, and if the whole scientific process is working properly, there may be no way to get a better guess at the answer to that question. How could you reliably do better than by surveying those with real and relevant expertise?

Earlier, I described peer review as a useful ingredient of a knowledge factory, one that can help it to produce reliable evidence. We can see scientific consensus as peer review's natural teammate. Peer review helps to create knowledge, and looking for scientific consensus boils it all down to see if there's any agreement on the answer to a particular question. If you wanted to build a reliable airplane, you probably would want to do so from the best engineering knowledge available at the time. Rather than read through all of the many, many scientific papers yourself, you could seek out the current scientific consensus on airplane design. For example, textbooks you study in school usually attempt to present the current consensus view.

The Shifting Sands of Consensus

At this point, it's important to acknowledge an annoying wrinkle in scientific consensus: It changes. At any point in time, consensus reflects our current best state of knowledge. But new experiments might blow a hole in our current theory, and then the current consensus will be thrown out the window. For example, the consensus on how the physical world works was thrown into chaos by Einstein's discovery of relativity, and a new consensus congealed later with Einstein's discovery at its core.

One way of looking at the current scientific consensus is that it's like the bookies in Vegas setting the odds for bets on the outcome of some game. If one of the star players gets hurt, you can expect the odds to change, just like the consensus will change with new and surprising experiments. But it's really hard to make money betting on sports, because the bookies are good at what they do (otherwise they'd go out of business). Similarly, it's hard to outdo the current scientific consensus, because scientists are competing with each other to try to reach the right answer, and to disprove wrong ones.

In our quest for truth, we need to beware escape hatches that let us easily hold onto any of our cherished beliefs no matter what evidence we run across. The fact that the scientific consensus is a shifting beast provides just such an escape hatch. If the consensus is currently against one of our cherished beliefs, the temptation is to just think: Who cares what the consensus is right now -- we don't know it's right, and it could easily just change tomorrow. But even though it feels good, we'd be foolish to so casually sweep aside consensus. Because, the alternative is just to make our own way -- and then we have to really think about why our gut instinct trumps the lab-coat army.

If I decided I didn't trust the scientific consensus on the flu vaccine, the razor-sharp question is: What do I personally know about vaccines? Am I really capable of getting a better answer than what emerges as agreement on the chaotic battlefield of science? There are so many experiments and so much data backing vaccines' effectiveness that I'd be grasping at straws. These are people who study vaccines their entire lives, and have every reason to disagree with each other, if it means making a new discovery. You're free, of course, to believe what you want, but going against scientific consensus without very good reason is a surefire path to delusion.

The larger point is that we need to be extremely careful around these kinds of "escape hatch" arguments. They are a tool to let us believe exactly what we want to believe, no matter the evidence. If we like the scientific consensus on some topic, we can say -- see, it's the scientific consensus! If we don't like the scientific consensus on another topic, we can resist changing our minds by saying -- well, yeah, maybe that's the consensus today, but it could change tomorrow, and anyways, truth isn't decided by committee!

By selectively applying our arguments this way, we get to avoid confronting our beliefs. We sacrifice truth for comfort. Because we hate to be wrong, it's a very natural thing to do. So if we really care about unpaving hell road, we need to be skeptical of our "inner lawyer" -- the part of us comfortable making arguments simply because they help us win.

How Should We Act Today Given That Consensus Could Change Tomorrow?

What should we make of the scary fact that the consensus could change tomorrow? Doesn't that make taking action in the world tricky? If today, all the economists agree that lowering taxes is a good idea, should we go ahead and lower taxes, knowing that tomorrow some new experiment could turn everything on its head?

There's no getting around it -- this is a complicated issue! We've given up on solid ground by admitting that we don't know anything for absolute certain. To be wise, we need to take our uncertainty into account when making decisions. We need to think about the effect of our decisions if it turns out the consensus was wrong, and we need to think about how strong the consensus is. There are a couple rules of thumb that can help us navigate this fog.

One is that where possible we should avoid decisions that are completely ruinous if the consensus is mistaken. If you're crossing a road and are pretty sure that the coast is clear, you could close your eyes as you march across. But it's probably wiser to keep your eyes open as you go, so you can react to a speeding Corvette on the off chance one zooms over the nearby hill. When it comes to taxes, maybe gently going in the direction the evidence suggests (a small decrease or increase) is wiser than making massive changes that could destroy the economy or bankrupt the nation if we're wrong.
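To make that rule of thumb concrete, here's a toy expected-value sketch in Python (every number here is invented for illustration): even when we judge the consensus very likely to be right, an option that's ruinous in the unlikely case it's wrong can still be the worse bet.

```python
# Toy decision sketch (all numbers invented): weigh each option by what
# happens if the consensus is wrong, not just by what happens if it's right.

p_consensus_right = 0.9  # our judgment of how strong the consensus is

# payoffs: (outcome if consensus is right, outcome if consensus is wrong)
options = {
    "small tax change":   (+1, -1),   # modest gain, modest loss
    "massive tax change": (+5, -50),  # big gain, but ruinous if wrong
}

for name, (if_right, if_wrong) in options.items():
    expected = p_consensus_right * if_right + (1 - p_consensus_right) * if_wrong
    print(f"{name}: expected value {expected:+.1f}")
```

With these made-up numbers, the massive change loses on average (-0.5) despite the consensus being 90% likely to be right, because the downside is so catastrophic.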

Another rule of thumb is that we should take actions that help us gather more information. If we think that lowering taxes will create more jobs, let's make sure that as we go we're examining how many jobs are actually created. A related idea is that it is often best to try out changes on a small scale (like local taxes) before rolling them out as a national experiment (an idea known as "the laboratories of democracy").

Perhaps most important is to take a sober look at how strong the consensus is. We can proceed pretty confidently in assuming that gravity is real, for example -- it's nearly impossible to find a scientist who disbelieves in the effects of gravity. But when it comes to ideal tax rates, if there is any consensus, it's pretty weak. When we have a very strong consensus, based on a lot of strong evidence, we can proceed more confidently.

The reason is that science most often develops incrementally where evidence is strong. Some understandings will of course be overturned, but each new theory most often refines the one it replaces, especially when it comes to what it means for us practically in the real world. Think of it this way: It's rare that consensus changes overnight and rocks how we do everything in the real world. Can you think of even one time in your lifetime when that's happened? In most cases, the new theory will hold true wherever the old theory did, but will also work in the situations where evidence broke the old one.

Even Einstein's discovery of relativity, which fundamentally changed our understanding of physics, didn't really change anything on a day-to-day basis -- we still design planes the same way, we still use the same old ways to design alarm clocks. It's true that the scientific consensus isn't a guarantee. It will likely shift over time, and new evidence might throw the consensus into doubt. But more often, a consensus might form where there was previously doubt, as more and more experiments and evidence accumulate. And remember, the modern world around us is evidence itself that science basically works.

Finally, now that we understand a little more about what scientific consensus is, and why it's so important and powerful, how can we find it? Is there some up-to-date list of all possible scientific questions, with the current in-favor answer for each, and the level of agreement among scientists? Unfortunately for us, it's not that simple. There are so many scientific questions, and so many scientists, that there's no simple way to continually bring all that information together.

As mentioned earlier, textbooks usually present the consensus view, so you could look to a recent textbook for a first take on consensus. But because textbooks are written by one or only a few authors, they may reflect their authors' personal take. So, for a more reliable window on consensus, there are often polls of scientists, especially on important or controversial issues. For example, you can find surveys on how many economists think the minimum wage is a good idea, or how many climatologists think global warming is real. Finally, while we should never take one scientific study in isolation too seriously (a rule of thumb is that you can usually find one study to support anything), sometimes scientists perform a "meta-analysis": they aggregate the data from all the papers that look at a particular issue, and see what those papers say when taken together. These meta-analyses provide another glimpse into the totality of current evidence.
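To give a feel for what "aggregating the data" means, here's a minimal sketch of one standard pooling method (fixed-effect, inverse-variance weighting), with invented study numbers: more precise studies get more say in the combined estimate.

```python
# Minimal fixed-effect meta-analysis sketch (invented numbers): pool
# study estimates, weighting each by the inverse of its variance so
# that more precise studies count for more.

studies = [
    (0.30, 0.04),  # (effect estimate, variance) from study 1
    (0.25, 0.09),  # study 2
    (0.40, 0.02),  # study 3
]

weights = [1.0 / var for _, var in studies]
pooled = sum(w * est for (est, _), w in zip(studies, weights)) / sum(weights)
pooled_var = 1.0 / sum(weights)

print(f"pooled effect: {pooled:.2f} (variance {pooled_var:.4f})")
```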

However, to be frank, science as a whole could certainly do a better job of making the current consensus accessible to the public, and also in expressing consensus in a way that isn't condescending.

That's Science in a Nutshell

This completes our whirlwind tour of how and why science works. So what can we take away? First, when science is brought down to Earth, we can see it's a fairly simple process at heart, although of course there are many wrinkles. But it's not a frightening bogeyman to keep at a distance or some impossible-to-understand mystery. We also begin to get glimmers of how some of the tools of science might be able to work for us, personally. For example, if there's something we're worried about, and it's been studied by scientists, we might be able to benefit from their hard work.

Coming back to the anecdote that opened this book, zoologists could tell us that baby bison can withstand the cold, and that human meddling can lead parents to reject their young. That baby bison need not have died.

The second and more important takeaway is that what helps science seek truth can also help us to unpave hell road. It can help us to trim away our broken beliefs, the ones that cause our good intentions to go astray. The next section talks about practical steps we can take to try on the scientific mindset in our own lives.

Adopting the Scientific Mindset

Science works because it pits skeptical but open-minded experts against each other, which leads to stronger and stronger understanding of the world as it progresses. Science self-corrects with more experiments and evidence, becoming less wrong over time. If that's the superpower we want, what changes in our mindset will help us towards that goal? I'm going to lay out three main takeaways: Embracing uncertainty, reasoning well, and knowing your weaknesses.

Embracing Uncertainty

First, we need to learn how to embrace uncertainty. Science emphasizes at all times that the current theory could well be wrong. We too need to be aware that even though we feel certain about something, we could be mistaken. We've seen that science generally works to get towards the right answers, at least on points of fact. Science as a whole seems to get "righter" over time. But individual people often become deluded, and remain deluded for their entire lives. There are people today who hate Jews or African-Americans, who view them as the source of all their problems. Some of those people sadly seem unreachable, and will die still hating Jews. They are stuck perpetually in a dead end, with no way out, because they are completely confident in their current broken beliefs.

Science never hits a complete dead end, though, because skepticism is its foundation. I'm not saying that science is immune to getting things wrong, which of course happens, and it's gone through its own share of racism. In the 1800s the study of skull shapes (called phrenology) was popular, and was a tool for justifying European superiority over other races. While it seems silly now to think you could make sweeping statements about someone based only on the shape of their skull, it was once a well-respected area of science. Only as evidence from more and more studies accumulated in the 1840s did it lose steam and become discredited. Phrenology was bogus, and it's shameful that it seemed respectable for as long as it did, but science did eventually escape that dead end.

This superpower, to reliably cut closer to truth over time, is one we'd like to import from science into our own lives. What this really requires is the courage to question everything. It means accepting that there is uncertainty, and then being okay with it, which is easier said than done -- because there's a certain satisfaction and comfort in the feeling of 100% confidence, in pure black-and-white thinking.

From time to time, it's worth reacquainting ourselves with the discomfort of uncertainty. One way to do this is to think about what beliefs you have that feel beyond question, and then for a moment, try to question the unquestionable. Do humans have free will? Is abortion evil? Is climate change real? Stop now, pick one thing you think is beyond question, and try for a minute, to see how it feels to take the other side of the argument. Acknowledge that the people who disagree with you probably aren't evil, or complete idiots -- and yet they came to a different conclusion. Can you be curious about why they think the way that they do? This exercise can be uncomfortable, or cause strong emotions, but it's worth trying, because it shows how hard it really is to embrace uncertainty. But if we really care about truth, it's supremely important to avoid committing to dead ends in our thinking. Once we hit a dead end, where we're completely sure of something that's actually wrong, the game's over.

Here's one practical trick to use when you're in an argument with someone else, and you notice a feeling of iron-clad confidence -- you're completely sure that you're right. When that happens, try and think of a piece of evidence, no matter how strange or unlikely, that would cause you to doubt your position. For example, if you're sure that the Earth is round (which by the way, I'm pretty sure of, too) -- what would make you question that belief? For me, if from an airplane someone showed me the actual edge of the "flat world," I'd really reconsider my position.

Maybe it's easy to do this exercise when the belief feels silly, but would you feel comfortable playing this game with an issue you feel strongly about? For example, take the issue of abortion, or raising taxes. Maybe you think taxes are too high, or that the rich should be taxed more, or maybe even that the poor should be taxed more. What kind of evidence would actually have a chance of changing your mind? It's not good if the answer is nothing. What if time and time again, whenever the rich were taxed less, the result was more and better jobs for the poor? Or if taxing the rich more consistently led to a higher GDP? Isn't there some possibility that the world works in either of those ways? If there's no evidence that would ever change your mind, that's really something to think hard about. It's possible you could be in a dead end.

Reasoning Well

Another important ingredient in the scientific mindset is reasoning. Reasoning is how we arrive at reasonable conclusions. One fact about reality is that some reasons simply are better than others. Some lines of reasoning just make more sense. They better obey logic and respect how the world works. It's reasonable to think that after you hear a meow, you might soon see a cat. But it's insane to think that you're an immortal dragon because you once had a dream where a wise old man told you that.

Reasoning well also means that strong beliefs require strong evidence. If you are nearly certain about something, you should have a lot of evidence supporting it. This draws us into a fundamental consideration, one that's worth a moment of pondering: How do we really know anything about the world? This might seem like a pointless navel-gazing question with no relevance to our life. But actually, it's an important one. Obviously, we don't believe things for no reason. I think the sun will rise tomorrow, because it's risen every day of my life. I take all those past memories as evidence, and I don't think tomorrow should be any different. If I drop a beer bottle from a balcony onto concrete, I think it will break, because I've seen glass bottles shatter before.

If someone told you they had a gut feeling that the sun wasn't going to rise tomorrow, or that the beer bottle would bounce like rubber off the ground, would you believe them? Probably not. It's hard to understand how a gut feeling could tell you anything about the sun, or about gravity. If they bet you $20 that the sun wasn't going to show tomorrow, you'd be silly not to take the bet. A gut feeling doesn't always cash out in evidence. It's not always the best reason, because two people can have two conflicting gut feelings, and there's no real way to tell which one is better. Of course, gut instinct can be dead on -- but we know it's not always the most reliable way to make factual decisions.

If you really want to convince me of something, you need to give me a good reason. If I caught a terrible infection, and you told me not to take the antibiotics my doctor prescribed, I'd need a damned good reason to take your advice. Maybe you tell me that your psychic had a vision about me, that I should instead eat three pickled crow's feet -- otherwise I'll die. Here we have a battle between different options. If I asked what evidence was behind the doctor's antibiotics, I could find lots of experiments, showing that people with terrible infections often were cured when given antibiotics. If I asked what evidence was behind the psychic's vision, there might be some seeming explanation: Crows never get sick, the psychic says, and by eating their feet, you gain some of their essence. There is some kind of vague logic -- but I could make up a dozen other similar stories. Where's the hard proof?

If we care about getting to the truth in our own lives, we must also care about reasons and evidence.

Know Your Weaknesses

One critical reason that science works is that it takes its own weaknesses into account. It's designed to avoid dead ends at all costs, yet it's built out of scientists, who can be as overconfident and wrong-headed as the rest of us.

Maybe it's obvious to you that just because a scientist knows something about their own particular field, like mathematics or engineering, it doesn't mean they're an expert in anything else. But that's far from obvious to many scientists themselves. Science is littered with examples of scientists who do great work at some point in their career, but go a little off the rails when dabbling outside of what they know. There's even a phrase, "Nobel disease," for how many Nobel prize-winners end up endorsing pseudo-science like mind-reading, cold fusion, or ghosts. For example, Einstein endorsed a psychic, famous physicist William Shockley believed in "scientific racism," chemist Linus Pauling was convinced that megadoses of Vitamin C would cure disease, and even Niko Tinbergen (the bird scientist who earlier helped us think about peer review) had a crank theory of autism that has long since been discredited.

Possibly slowing the progress of science, scientists are often extremely slow to give up their own pet theories. You can understand why this might be: If you invested your whole life in studying one theory, it would be very painful to have it disproved. There's also some truth to the stereotype that scientists can become arrogant as a result of all their education. The feeling that you are an expert can go to your head, and you can easily forget that expertise in a narrow domain is just that -- narrow expertise. These foibles seem to be a strong part of human nature.

Science works, even though individual scientists can be stubborn and wrong, because it's designed to be cut-throat and resilient. The practice of science is informed by the universality of human weakness and flaw. It's good that some of the scientists reviewing your experiment might be your academic enemies, because they'll look intensely for any excuse to reject your paper, and you'll take that into account when you're doing experiments. You know you can't give them that easy excuse they're looking for. It's good that everyone wants to be the one to disprove a famous theory, to get their name into the history books -- that way everyone is seeking the flaws in the current thinking. We remember Einstein because he overturned Newton, and we might remember your name for all time if you overturn Einstein! It's good that no one is absolutely in charge, that there's no King of Science. Everyone in a scientific field gets to make up their own mind, and make their own contribution to the current consensus (or the lack of one). That way, there's no absolute power to corrupt science absolutely.

If we care personally about getting to the truth, we need to take our own weaknesses into account, just as science does. One weakness is to believe that we already have all the right answers. To combat that, we can actively seek out disagreement. We can seek out people with different opinions than ours, and talk with them, from a place of curiosity. We can watch TV shows we ordinarily wouldn't -- PBS or Fox News -- and try to see their side of the story. That might sound painful, and maybe it will be, but if it gets us closer to the truth, the pain will be worth it -- we're in search of a superpower, after all!

Another weakness is to treat Google like a delivery service for comfortable-truth. If you're in a disagreement, you might do a web search, and scroll until you find the first article supporting what you already believed. Of course, that's a recipe that allows anyone to support any belief. It's a convenient escape hatch. And maybe that's one reason why it's so hard for anyone to agree even on basic facts across the political aisle. The trick is to catch yourself in the comfortable-truth dance. Notice when you're actively discounting the first couple links because they don't support your argument. One place to start, which is easier (and more fun), is to notice when people you disagree with do it. But be sure you graduate to doing the same thing while looking in the mirror.

Overall, we can try to become more self-aware, and begin to notice the subtle flaws in our own thinking. There are many other similar weaknesses (called cognitive biases) that you can read about. But for now, it helps to realize that whatever thinking flaws we notice in our quirky friends or people with opposite political beliefs from us, we should direct that same noticing inwards.

The Key Commitment: Truth over Comfort

Now that we understand what kind of mindset we need to absorb the scientific superpower, let's boil it down further. It may be hard to remember everything about the scientific method, and it may not be clear exactly what we should actually do differently in our lives as a result. In the next section, we're going to talk about one all-important commitment that you can make, and three simple practices you can build into your own life. If you take these four ideas seriously, you'll be well on your way.

Here's the key commitment: "I value truth over comfort." This is the core spirit of science -- question everything and follow the evidence where it leads.

What do we give up, and what do we gain, by making this commitment? The cost is that you might need to burn your cozy beliefs in the name of truth. But you gain a potential superpower in the bargain -- the ability to become more correct over time. Taking this commitment seriously, though, just like a challenging New Year's resolution, is much easier said than done.

For example, if you asked your friends and family, they'd probably say they already value truth over comfort. Who really sets out to ignore the truth because they don't think they can handle it? But you also know that Uncle Randy won't give up his favorite political conspiracy theory, no matter how long you argue with him. He's not really willing to honestly question that cozy delusion. For the last time, I don't think the president is a reptile-man, Randy! To really have the courage to question everything, to bravely follow the evidence, is surprisingly difficult -- our brains just aren't wired that way.

We need to go beyond lip-service to avoid the Uncle Randy trap. We need to acknowledge that even if we already value truth over comfort, we're probably falling short. We're probably not walking the walk, because this kind of walking takes practice. What follows are three specific practices that you can take into your own life. They're tricks to help you achieve what you already aim to: To land at the truth.

Practice 1: Make predictions

This first practice will keep appearing and re-appearing throughout the rest of the book. It's a way of making your beliefs pay rent. You're deciding you won't let them sit there untested. They've got to do work to stick around.

The practice begins by asking a question of one of your prized beliefs, one that you're pretty confident is true, but keeps coming under fire from others. You ask: If this belief is true, what in the future will come true? You write down what that prediction is, and then, you wait. In the future, you note if the prediction came true or not. If the prediction comes true, there's reason to be more confident in your belief. If the prediction fails, then there's reason to be less confident in your belief. You're doing a small scientific experiment -- collecting data and seeing if your theory holds true. Let's walk through a concrete example to see how this might work in practice.

Let's say I suspect that my coworker Jim hates me. I think I see him giving me the stink eye when we pass in the halls. And there was that one time he attacked my idea in the meeting. I might ask, "If Jim really hates me, what will happen in the future?" What's nice about this question is that it requires me to get very specific, and to think about what in a person's behavior might be a hint of their hatred. I might predict that Jim won't say happy birthday to me in a few weeks, or come to the happy hour after work that day. I write that down. And then, I see what happens. Now, even if he doesn't say happy birthday to me, it's not a definite sign that he hates me, but at least I tested my belief in some minor way. I have more evidence. Or, maybe I'll be surprised and he'll buy me a drink at the happy hour. Perhaps he doesn't hate me after all.

Over time, if you keep making predictions, you might be surprised with what you discover.
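If you'd rather keep the journal digitally than on paper, here's a minimal Python sketch of the practice (the entries are made up): each prediction is written down up front and scored only after the fact, so you can track how often your beliefs pay their rent.

```python
# Minimal "prediction journal" sketch (made-up entries): record a
# prediction before the fact, score it afterwards, track your hit rate.

from dataclasses import dataclass
from typing import Optional

@dataclass
class Prediction:
    belief: str                       # the belief being tested
    prediction: str                   # what should happen if it's true
    came_true: Optional[bool] = None  # filled in once you know

journal = [
    Prediction("Jim hates me", "Jim will skip my birthday happy hour"),
]

# ...weeks later, record what actually happened:
journal[0].came_true = False  # he came, and even bought me a drink

scored = [p for p in journal if p.came_true is not None]
hits = sum(p.came_true for p in scored)
print(f"{hits} of {len(scored)} predictions came true")
```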

Practice 2: Cherish Surprise

The second practice is to cherish moments of surprise. Take a second, and think: Has anything surprised you today, or this week? Try to write down when you notice you've been surprised -- what surprised you, and why it surprised you. You feel surprise when something goes differently than expected, and that's valuable information -- it's like brain-gold. It's telling you that some belief you hold probably needs adjustment. When we feel surprise, we might be tempted to brush it aside, or just to move on with our busy day. But whether it's a pleasant surprise (things went better than I expected!) or a sour one (how did I miss my bus?), there's something we can learn. We should sit with surprise, and ask ourselves, "What is surprising right now? And what belief about the world does this challenge?"

In science, cherishing surprise has led to breakthrough discoveries. For example, antibiotics were discovered when a Petri dish containing bacteria was contaminated by a mold. Ordinarily this might simply be frustrating -- an experiment gone wrong. But Alexander Fleming noticed something surprising: There was a circle around the mold where no bacteria could be found. Even though it wasn't what he was trying to study, he noticed a disagreement between what he saw and his belief that the mold and the bacteria would grow over each other. In cherishing his surprise, he realized that the mold was creating some kind of anti-bacterial compound -- a realization that changed medicine forever! And there are many more examples from science of just this kind of serendipitous discovery.

Tied up in the idea of cherishing surprise is that of developing curiosity. Curiosity is the desire to know or learn about something. You'll have more success learning about how the world works, if you have a genuine sense of curiosity about it. And if we really cherish surprise, we should try to generate it! To generate surprise, we should try new things and make small experiments. This is similar to the idea in science of finding experiments that will tease apart two competing theories. No matter how the experiment goes, we may be able to falsify one of the theories. The best experiments are the ones in which no matter how the results come out, we'll find ourselves surprised.

Interestingly, surprise is a hint that we might be wrong in some way. And if we cherish surprise, so too should we celebrate wrongness.

Practice 3: Celebrate Wrongness

The third practice is the most difficult one, but it's critical. Have you ever been arguing with someone, and felt yourself getting more and more upset? Maybe the argument ends with a few angry words. And then, the next day, you realize that you were angry because the person you were arguing with was making a lot of sense, and that you were probably wrong, but...you hate being wrong. We often feel bad when we realize that we're wrong. So, sometimes we just plow ahead stubbornly, never admitting we were wrong, just to avoid that bad feeling.

This happens a lot in relationships. I can vividly recall waking up after a fight with my then-girlfriend, and dreading calling her to confess that I realized just how badly I had been wrong -- it wasn't saying sorry that I was resisting, but that I'd be eating humble pie after being so sure I was right the night before. Sometimes you just want so badly to win the argument that you forget to care about getting to the real truth of the matter.

Even though it doesn't feel like it, it's beautiful when we discover we're wrong. We should celebrate wrongness, because that means we can remove a broken belief, it means that we're on the road to truth. We're cutting through the weeds! Whenever you realize that you were wrong, stop and appreciate the moment. Try to smile, and think seriously about changing your mind. If possible, write down what you were wrong about, or make a note of it in your cellphone. You might notice I'm often suggesting to write things down. The reason is that writing forces us to organize our thoughts, and it leaves a permanent record that you can revisit. You could begin to see patterns: What are you often mistaken about, and how do you most often discover you're wrong? The habit of recording valuable data is something we should also steal from the methods of science.

What does celebrating wrongness actually look like in practice? Imagine a married couple who disagree about what route is faster to the grocery store. Tom is driving and thinks that the highway will be faster, but Jane is convinced that there'll be a traffic jam this time of day, and that the backroads will be faster. Tom decides to take the highway, and sure enough, one mile down the road, they grind to a halt. Tom can feel his pulse rising as Jane says -- "Well, it looks like I was right." All his being is telling him to dig in ("Yeah, but that's just today -- usually this time of day the highway is clear."), but he catches himself, and laughs instead. "This wasn't the fastest way today -- you were right. I wonder if I was wrong -- is traffic this time of day always this bad?" Jane was ready for a fight, but she relaxes too, "Well, I don't think it's always quite this bad, but it's been getting worse on Friday evenings ever since the big movie theater opened." "Oh, that's right -- I forgot about the movie theater!"

The more we celebrate wrongness, the more we can begin to treat arguments in a new way. Rather than looking at a disagreement as a battle to be right, it can be a collaboration to uncover the truth, whatever that might be. And if we really value truth over comfort, this is simply a better way to view a disagreement.

The Path Ahead

This essay covered a lot of ground. We started by looking at science, and how it has made our lives better. We saw how it tends to get righter over time: it self-corrects. We tried to get a deeper understanding of what allows science to avoid dead ends, and how we can change our mindset to absorb its superpower. We ended by making the commitment to value truth over comfort, and suggesting three practices that can help us to meet that commitment.

Let's be clear: thinking clearly isn't easy, and it's not something we learn overnight. I'm still working at it, and fail often. It takes practice, and reading this essay isn't enough. You can't learn how to ride a bike by reading, either.

But there's much to be gained from actual practice. Clear thinking can benefit every part of our life, from personal happiness to financial success. The superpower of science has much to offer us, if we take it seriously. One powerful gift is the ability to see through pretenders who hope to manipulate us. There are so many experts claiming to have all the answers for us, but one expert's answers often conflict with another's. How can we hope to make sense of all the information that's constantly being beamed at us? In whom should we invest our trust? That's the subject of the next chapter.
