Got bias?

The short answer is yes, you do. Your mission, should you choose to be a great clinician or scientist, is to recognize your biases and not allow them to color your daily decisions or affect your ability to assess the evidence.

Bias is, to put it simply, a preference for one thing over another. There are more technical definitions, but that's the gist. As humans, there is evidence that we are all biased. If you prefer the color blue over red, or are a dog person rather than a cat person, you are biased. The same is true of our thought processes and decision making. Everything that happens in our lives colors our world view, and within that world view lie our biases. Cognitive biases, then, are a part of life; everyone has them. They color our personal interactions and our impressions of other people. They also affect our ability to accept new concepts and to change our minds when new data is presented.

Bias is an important consideration in medical and basic science research because of its subconscious impact on rejection or acceptance of new concepts or theories. There are many examples of times that scientists had a belief that an experiment would turn out a certain way, and lo and behold, it did so. This is sometimes because they were right. We hope so! However, there is always the possibility that subconscious action on the part of the scientist influenced the results of their experiments.

One of my favorite fictional book series contains a gem of truth that has stuck with me since I first read it many years ago. In Wizard's First Rule, Terry Goodkind wrote: "People can be made to believe any lie because they want to believe it is true, or because they are afraid that it is true." At the time, I thought this an exaggeration. Then I began to pay attention. The events of the past decades offer substantial evidence that if you play to people's preferences or fears, some will believe almost anything, regardless of weak or absent evidence. This seems especially true of the general public when it comes to medical research. A single published paper covered by the New York Times can and has sent people into a panic and changed the behavior of millions, potentially endangering society. On the other hand, such a response could benefit society; it all depends on the quality and accuracy of the research that led to said panic.

There are many ways bias can show up in your experimental design and data analysis, and each makes for poorer quality research. Because of an emerging understanding of how strongly bias can affect the quality of the data produced, researchers work to eliminate opportunities for bias as much as possible in medical and clinical research. Randomization, blinding, and dual analysis of subjective outcomes are some of the ways bias can be controlled.
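To make the first two of those safeguards concrete, here is a minimal sketch (my own illustration, not a tool from any real trial) of how random allocation and blinded outcome labels might be generated: subjects are shuffled into two arms, and outcome assessors see only neutral codes that carry no information about which arm a subject is in.

```python
import random

def randomize_and_blind(subject_ids, seed=None):
    """Randomly allocate subjects to two arms and generate blinded labels.

    Returns (allocation, blinded_codes). The true allocation stays with the
    unblinded statistician; outcome assessors work only from the codes.
    """
    rng = random.Random(seed)
    ids = list(subject_ids)
    rng.shuffle(ids)  # randomization: shuffle, then split down the middle
    half = len(ids) // 2
    allocation = {sid: ("treatment" if i < half else "control")
                  for i, sid in enumerate(ids)}
    # Blinding: assign codes in a second, independent shuffled order so the
    # code number reveals nothing about arm assignment.
    codes = list(allocation)
    rng.shuffle(codes)
    blinded_codes = {sid: f"S{n:03d}" for n, sid in enumerate(codes)}
    return allocation, blinded_codes
```

Real trials use more careful schemes (block or stratified randomization, third-party allocation concealment), but the principle is the same: the person measuring the outcome should have no way to infer the assignment.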

Bias is present not only in the performance of research, but also in our digestion of the literature. We all have theories about the way things are, and we would prefer to believe our own theories over those of others. As a result, we view evidence for a theory we dislike through a different lens than evidence for a theory we hold in high esteem.

For this reason, in the systematic review process, before looking at the results of a study or group of studies on a topic of interest, researchers first assess the risk of bias in the experimental design, data collection, and analysis of the relevant studies. This gives a measure of whether a researcher could have influenced their own results. Because of the subjective nature of this assessment, it is often performed by a group of people, or by two people independently, with any disagreements resolved according to a protocol written before the analysis begins. This protocol is incredibly detailed: it should specify the database search methods, including keywords, and the criteria for choosing relevant studies, such as a sufficient number of human subjects or inclusion of the outcome of interest as a primary outcome. Publishing these protocols lets readers of the end product, the systematic review, verify that the authors were thorough and discriminating in their review process, and explains why certain studies were excluded from the final analysis. Any studies that don't pass muster are eliminated in a systematic way.

Together, these methods ideally create the strongest possible evidence base for a topic. Systematic reviews are currently the pinnacle of medical evidence, and the assessment of the quality of the evidence is often more relevant than the data summarized within.

To further explain the importance of the criteria assessed in a systematic review, let’s take an example of a hypothetical clinical research study performed in the ER. Credit where due, I didn’t make up this example but heard a version of it in the Evidence Based Medicine course offered through the Human Investigations Program. Shameless plug to follow: this is a great course that I think should be required of all clinicians and scientists.

For this hypothetical study, a specialist must be called in before a person presenting with the condition or event of interest can be enrolled. Rather than call the specialist in at night, study personnel choose to skip the potential subjects who come in at night. This selects for subjects who come to the ER with said condition only in the daytime. We can't eliminate the possibility that there is something different about the people who come in at night versus during the day, because we collected no information about the people who come in at night. This introduces an unmeasurable potential bias into the study. Unmeasurable is the kicker here: it means there is no way to mathematically control for this confounder, and the resulting data is suspect. There is no way to extrapolate the data to everyone with said condition, because there may be differences within the population that weren't captured within the study. This detail may not seem all that important by itself, but it is only one of many ways an investigator or study personnel can bias results, and confounders that weren't measured may compound in an unknown manner. If, for example, the study personnel who measured outcomes weren't blinded, subjects were selected for convenience, and the randomization wasn't truly random, there are now three unmeasurable confounders that bring the results into question. The researchers who performed this study may choose not to mention in their methods how they selected subjects, or they may tell you directly that they eliminated people who came in at night. The latter is preferable, of course, but both indicate bias in the selection of subjects.
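A tiny simulation makes the damage visible. The numbers below are entirely made up for illustration: suppose 40% of patients arrive at night and night arrivals happen to have a higher true event rate. Enrolling only daytime patients then produces a study estimate well below the true population rate, and nothing inside the study's own data would reveal the gap.

```python
import random

def simulate_selection_bias(n=10_000, seed=0):
    """Toy illustration with hypothetical rates: night-time patients have a
    higher true event rate, but only daytime arrivals are enrolled."""
    rng = random.Random(seed)
    population, enrolled = [], []
    for _ in range(n):
        arrives_at_night = rng.random() < 0.4            # assume 40% arrive at night
        event_rate = 0.30 if arrives_at_night else 0.10  # assumed true rates
        outcome = rng.random() < event_rate
        population.append(outcome)
        if not arrives_at_night:                         # night arrivals skipped
            enrolled.append(outcome)
    pop_rate = sum(population) / len(population)
    study_rate = sum(enrolled) / len(enrolled)
    return pop_rate, study_rate
```

With these invented rates, the whole-population event rate is about 0.4 × 0.30 + 0.6 × 0.10 = 0.18, while the daytime-only study estimates roughly 0.10. An analyst holding only the enrolled sample has no data on the night arrivals, which is exactly what "unmeasurable" means.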

The above example is extreme, and a study wouldn't be performed in this manner today. In the past, however, researchers weren't aware of the impact of potential confounders. When assessing literature from decades past (if you can get the full text of the article at all), you will find that concerns like the selection of subjects are rarely addressed in the methods of the publication. Today, it is standard to include these details in the methods for any clinical study, improving the quality of the results therein.

In an evidence-based medicine approach, always assess the quality of the results first; only then ask what the results say and what they mean.

Ultimately, if the quality of the reporting is poor, it is impossible to know the quality of the data presented, and such data should be taken with a grain of salt. Personally, I think there are at least a few public health initiatives that began in decades past and continue today that should be revisited for this reason: the evidence behind them is incredibly poor.

Another type of bias that is not often considered is publication bias. Guess what: editors and reviewers for journals are people. This means they have inherent biases for or against certain scientific concepts, as well as personal biases that may lead them to review papers differently based upon who wrote them. If your collaborator wrote a paper, would you be more or less likely to skim it rather than read the whole thing? Would you be more or less likely to believe their interpretation of their data? We are also all aware that some studies are simply more fundable than others because of the nature of the hypothesis. Essentially, if you aim to test a hypothesis that is accepted by the scientific community, you are more likely to get your study funded than someone who aims to test a contradictory hypothesis. These are all important considerations, and in my opinion the holes in the literature are often as telling as the rich areas.

If you’re interested in your social biases, there is a fun research study, Project Implicit, going on at Harvard. They have created an online tool to measure a few different biases. I took the racial bias test twice because I didn’t believe the result, but it came out the same the second time; this white girl is biased against white people! I can’t really explain this other than to say there have been a few white people I didn’t care much for in my past, and not so many of other races. Regardless of the reason, I found it interesting and relevant to my daily life. Better understanding of your own mind is never bad.

Controlling for bias in every way possible makes for better quality research and, ultimately, better decision making. We can all benefit from remembering that everyone has biases, and from recognizing our own in all aspects of our lives.