What Is Predictive Validity? | Examples & Definition
Predictive validity refers to the ability of a test or other measurement to predict a future outcome. Here, an outcome can be a behaviour, a level of performance, or even the onset of a disease at some point in the future.
Predictive validity is a subtype of criterion validity. It is often used in education, psychology, and employee selection.
What is predictive validity?
Predictive validity is demonstrated when a test can predict a future outcome. To establish this type of validity, the test must correlate with a variable that can only be assessed at some point in the future – i.e., after the test has been administered.
To assess predictive validity, researchers examine how the results of a test predict future performance. For example, SAT scores are considered predictive of student retention in the US: students with higher SAT scores are more likely to return for their second year of college. Here, you can see that the outcome is, by design, assessed at a point in the future.
Predictive validity example
A test score has predictive validity when it can predict an individual’s performance in a narrowly defined context, such as work, school, or a medical setting.
Tests aimed at screening job candidates, prospective students, or individuals at risk of a specific health issue are often designed with predictive validity in mind.
Predictive vs concurrent validity
Predictive and concurrent validity are both subtypes of criterion validity. They both refer to validation strategies in which a test is evaluated by comparing it against a certain criterion or ‘gold standard’. Here, the criterion is a well-established measurement method that accurately measures the construct being studied.
The main difference between predictive validity and concurrent validity is the time at which the two measures are administered.
- In predictive validity, the criterion variables are measured after the test scores are obtained.
- In concurrent validity, the scores of a test and the criterion variables are obtained at the same time.
How to measure predictive validity
Predictive validity is measured by comparing a test’s score against the score of an accepted instrument – i.e., the criterion or ‘gold standard’.
The measure to be validated should be correlated with the criterion variable. Correlation between the scores of the test and the criterion variable is calculated using a correlation coefficient, such as Pearson’s r. A correlation coefficient expresses the strength of the relationship between two variables in a single value between −1 and +1.
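For reference, if x denotes the test scores and y the criterion scores for n individuals, Pearson’s r is defined as:

$$
r = \frac{\sum_{i=1}^{n}(x_i - \bar{x})(y_i - \bar{y})}{\sqrt{\sum_{i=1}^{n}(x_i - \bar{x})^2}\;\sqrt{\sum_{i=1}^{n}(y_i - \bar{y})^2}}
$$

where x̄ and ȳ are the mean test and criterion scores. The numerator captures how the two sets of scores vary together, and the denominator scales the result to a value between −1 and +1.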
Correlation coefficient values can be interpreted as follows:
- r = 1: There is perfect positive correlation.
- r = 0: There is no correlation at all.
- r = −1: There is perfect negative correlation.
You can automatically calculate Pearson’s r in Excel, R, SPSS, or other statistical software.
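As an illustration, here is a minimal sketch in Python of how this correlation could be computed. The scores below are invented example values (not from any real study), and scipy.stats.pearsonr is just one of several ways to obtain Pearson’s r.

```python
# Minimal sketch: estimating predictive validity as the Pearson correlation
# between test scores and a criterion measured at a later point in time.
# All values below are hypothetical illustration data.

from scipy.stats import pearsonr

# Hypothetical admissions test scores, collected at time 1
test_scores = [1180, 1250, 1010, 1390, 1320, 1150, 1280, 1060]

# Hypothetical criterion measured later, e.g. first-year GPA at time 2
criterion = [3.1, 3.4, 2.6, 3.8, 3.5, 2.9, 3.3, 2.7]

# Pearson's r expresses the strength of the linear relationship (-1 to +1);
# the p-value indicates whether the correlation differs from zero.
r, p_value = pearsonr(test_scores, criterion)

print(f"Pearson's r = {r:.2f}, p = {p_value:.3f}")
```

A strong positive r in an analysis like this would be taken as evidence of predictive validity, subject to the caveats about correlation and causation discussed below.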
A strong positive correlation provides evidence of predictive validity. In other words, it indicates that a test can correctly predict what you hypothesise it should. Keep in mind, however, that correlation does not imply causation.
The higher the correlation between a test and the criterion, the higher the predictive validity of the test. No correlation or a negative correlation indicates that the test has poor predictive validity.
Frequently asked questions about predictive validity
- What are the two types of criterion validity?
Criterion validity evaluates how well a test measures the outcome it was designed to measure. An outcome can be, for example, the onset of a disease.
Criterion validity consists of two subtypes depending on the time at which the two measures (the criterion and your test) are obtained:
- Concurrent validity is a validation strategy where the scores of a test and the criterion are obtained at the same time.
- Predictive validity is a validation strategy where the criterion variables are measured after the test scores are obtained.
- What’s the difference between reliability and validity?
Reliability and validity are both about how well a method measures something:
- Reliability refers to the consistency of a measure (whether the results can be reproduced under the same conditions).
- Validity refers to the accuracy of a measure (whether the results really do represent what they are supposed to measure).
If you are doing experimental research, you also have to consider the internal and external validity of your experiment.
- What are the main types of validity?
Validity tells you how accurately a method measures what it was designed to measure. There are 4 main types of validity:
- Construct validity: Does the test measure the construct it was designed to measure?
- Face validity: Does the test appear to be suitable for its objectives?
- Content validity: Does the test cover all relevant parts of the construct it aims to measure?
- Criterion validity: Do the results accurately measure the concrete outcome they are designed to measure?