What Is Survivorship Bias? | Definition & Examples
Survivorship bias occurs when researchers focus on individuals, groups, or cases that have passed some sort of selection process while ignoring those that did not. Because only a subset of the population is studied, survivorship bias can lead researchers to form incorrect conclusions. It is a type of selection bias.
What is survivorship bias?
Survivorship bias is a form of selection bias. It occurs when a dataset only considers existing (or ‘surviving’) observations and fails to consider observations that have ceased to exist.
For example, when investigating the profitability of the tech industry, you must also study businesses that went bankrupt, rather than focusing only on businesses currently in the market.
Focusing on a subset of your sample that has already passed some sort of selection process increases your chances of drawing incorrect conclusions. ‘Surviving’ observations may have survived exactly because they are more resilient to difficult conditions, while others have ceased to exist as a result of those same conditions.
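To make this concrete, here is a minimal sketch in Python with made-up figures: the average return looks healthy when only surviving firms are counted, but turns negative once the bankrupt firms are included.

```python
# Hypothetical annual returns (%) for ten tech startups founded in the
# same year. Half of them later went bankrupt; a survivor-only analysis
# never sees their figures. All numbers are invented for illustration.

surviving_returns = [12.0, 9.0, 15.0, 8.0, 11.0]       # still in the market
defunct_returns = [-30.0, -45.0, -10.0, -60.0, -25.0]  # went bankrupt


def mean(values):
    return sum(values) / len(values)


survivors_only = mean(surviving_returns)
full_cohort = mean(surviving_returns + defunct_returns)

print(f"Survivors only: {survivors_only:+.1f}%")  # +11.0% -> overly optimistic
print(f"Full cohort:    {full_cohort:+.1f}%")     # -11.5% -> a very different picture
```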
Why survivorship bias matters
When a study is affected by survivorship bias, we only pay attention to part of the data. This can have a number of consequences, such as:
- Overly optimistic conclusions that do not represent reality, leading us to think that circumstances are easier or more likely to work in our favor than they actually are. Studying only successful entrepreneurs or startups can have this effect.
- Misinterpretation of correlation as causation, or seeing a cause-and-effect relationship where there isn’t one. High school or college dropouts who become successful entrepreneurs, for example, typically succeed despite leaving school, not because of it.
- Incomplete decision-making, missing out on important information about those who didn’t ‘survive’, such as businesses that failed despite seemingly fertile growth environments or hard-working founders.
Awareness of survivorship bias is important because it impacts our perception, judgment, and the quality of our conclusions.
Survivorship bias examples
Survivorship bias can cloud our judgment not only in research, but in everyday life, too.
Before drawing any conclusions, ask yourself whether your dataset is truly complete. If it isn’t, you are also at risk of survivorship bias.
How to prevent survivorship bias
Survivorship bias is a common logical error, but you can take several steps to avoid it:
- Consider what’s missing from your data, asking yourself: Is your dataset complete? Which observations didn’t ‘survive’ an event or selection procedure? Think of defunct businesses, school dropouts who didn’t become billionaires, or clinical trial participants who showed no improvement. A simple completeness check is sketched after this list.
- Choose data sources that are designed to be complete and accurate. Be careful not to omit observations that no longer exist but would change your analysis or conclusions. For example, also look for academic journals that publish negative study results.
- When cleaning your data, be mindful of outlier removal. You may accidentally remove critical information if you don’t understand what the outliers mean. It’s crucial to determine whether extreme values are truly errors or whether they represent something else. Subject-matter experts can help you with this, as they know the field best.
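As a minimal sketch of the completeness check mentioned above (all IDs are hypothetical), you can compare the final dataset against the full cohort that originally entered the study or market, so that non-survivors are identified rather than silently dropped:

```python
# Hypothetical firm IDs: the cohort that entered the market at the start
# of the observation window versus the firms still present in the data.

initial_cohort = {"firm_a", "firm_b", "firm_c", "firm_d", "firm_e"}
final_dataset = {"firm_a", "firm_c", "firm_e"}  # only firms still operating

# The set difference reveals the observations that did not 'survive'.
non_survivors = initial_cohort - final_dataset

if non_survivors:
    print(f"{len(non_survivors)} of {len(initial_cohort)} observations "
          f"did not 'survive': {sorted(non_survivors)}")
    # Decide explicitly how to handle them (e.g., trace their last known
    # figures) instead of analysing the survivors alone.
```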
Being aware of survivorship bias, as well as being open and transparent about your assumptions, is the best strategy to prevent it.
Frequently asked questions about survivorship bias
What are common types of selection bias?
Common types of selection bias are:
- Sampling or ascertainment bias
- Attrition bias
- Volunteer or self-selection bias
- Survivorship bias
- Non-response bias
- Undercoverage bias
Why is bias in research a problem?
Bias affects the validity and reliability of your findings, leading to false conclusions and a misinterpretation of the truth. This can have serious implications in areas like medical research where new forms of treatment are being evaluated.
What’s the difference between reliability and validity?
Reliability and validity are both about how well a method measures something:
- Reliability refers to the consistency of a measure (whether the results can be reproduced under the same conditions).
- Validity refers to the accuracy of a measure (whether the results really do represent what they are supposed to measure).
If you are doing experimental research, you also have to consider the internal and external validity of your experiment.