A case study of graduate admissions: Application of three principles of human decision making
- Dawes, R. M.
- American Psychologist, 26, 180-188
- An online version of the paper can be found here
The problem is that the admissions committee does not know what they are (except perhaps on a vague verbal level). And it has no way of assessing them. Since the clinical judgment of the admissions committee is not even as good as two of the conventional variables considered singly, it can only be concluded that the attempt of the admissions committee to assess these other presumably important variables decreases rather than increases the validity of its judgments. What is needed is research concerning the determinants of graduate success.
This paper involves a fair amount of literature review and discussion on simple models versus experts.
A fascinating quote:
How can a model (linear or any other sort) based on an individual’s behavior do a better job of what the individual is trying to do than does the individual himself? The answer is that a mathematical model, by its very nature, is an abstraction of the process it models; hence, if the decision maker’s behavior involves following valid principles but following them poorly, these valid principles will be abstracted by the model—as long as the deviations from these principles are not systematically related to the variables the decision maker is considering.
For example, a decision maker may be weighting aptitude, past performance, and motivation correctly in predicting performance in graduate school and beyond, yet he may be influenced by such things as fatigue, headaches, boredom, and so on; in addition, he will be influenced by whether the most recent applications he has seen are particularly strong or weak.
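This mechanism is easy to demonstrate with a small simulation. The sketch below uses entirely synthetic data and made-up weights (it is not the paper's dataset): a judge applies valid linear weights to the predictors but adds unsystematic noise (fatigue, boredom, context effects), and a linear model fit to the judge's own ratings strips that noise out and predicts the outcome better than the judge does.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 2000

# Hypothetical standardized predictors: aptitude, past performance, motivation.
X = rng.normal(size=(n, 3))
true_weights = np.array([0.5, 0.4, 0.3])

# The outcome depends linearly on the predictors plus unexplained variance.
outcome = X @ true_weights + rng.normal(scale=1.0, size=n)

# The judge follows the same valid weights, but adds unsystematic noise
# (fatigue, headaches, boredom, recency of strong or weak applications).
judge = X @ true_weights + rng.normal(scale=1.0, size=n)

# Model of the judge: regress the judge's ratings on the predictors.
beta, *_ = np.linalg.lstsq(X, judge, rcond=None)
model_of_judge = X @ beta

def corr(a, b):
    return np.corrcoef(a, b)[0, 1]

print(f"judge vs outcome:          {corr(judge, outcome):.2f}")
print(f"model of judge vs outcome: {corr(model_of_judge, outcome):.2f}")
```

Because the judge's noise is unrelated to the predictors, the regression abstracts only the valid part of his policy, so the model of the judge correlates more strongly with the outcome than the judge himself.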
Here is how the tests go down:
- Have the admissions committee rank prospective PhD students based on its own assessment of a variety of characteristics (GPA, GRE scores, transcript, recommendations, etc.). This was done for applicants from 1964-1967.
- Separately, feed a computer three conventional variables for each applicant: GPA, undergraduate institution quality, and GRE score.
- Collect performance ratings on students in 1969. The faculty rank students based on their realized performance in graduate school on a 5 point scale.
- Compare the performance of the admission committee rankings and the performance of the computer prediction.
Here are the results:
- The average rating of the admissions committee is only 19% correlated with outcome.
- Simply using GPA alone does a better job than the admissions committee (21%).
- A simple multiple regression of the grades, GRE, and institution quality has a 40% correlation.
==> A simple linear combination of the variables identified in (2) above outperforms the admission committee rankings.
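For readers who want to see the mechanics, here is a minimal sketch of how a multiple correlation like the 40% figure is computed: fit ordinary least squares and correlate the fitted values with the outcome. The data, sample size, and weights below are synthetic assumptions for illustration, not the paper's numbers.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 100  # synthetic cohort; not the paper's sample

# Hypothetical standardized predictors: GPA, GRE, institution quality.
X = rng.normal(size=(n, 3))

# Synthetic faculty performance ratings (made-up weights plus noise).
rating = 0.3 * X[:, 0] + 0.2 * X[:, 1] + 0.15 * X[:, 2] + rng.normal(size=n)

# Add an intercept column and fit ordinary least squares.
A = np.column_stack([np.ones(n), X])
beta, *_ = np.linalg.lstsq(A, rating, rcond=None)

# The multiple correlation R is the correlation between the fitted
# linear combination and the outcome -- the analogue of the paper's 0.40.
fitted = A @ beta
R = np.corrcoef(fitted, rating)[0, 1]
print(f"multiple correlation R = {R:.2f}")
```

In-sample, the multiple correlation is always at least as large as the correlation of any single predictor, which is why the three-variable regression beats GPA alone.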
Next, the author uses multiple regression to “quantify” how the admissions committee makes its decisions, and then uses that model to predict future performance. (This is called a paramorphic representation: in other words, a computer model that predicts how the experts will act, based on data on their past decisions.)
Paradoxically, after-the-fact performance is 25% correlated with the computer’s prediction of the experts’ behavior, whereas it was only 19% correlated with the experts’ actual decisions.
==> A computer model predicting how the experts will act, based on their historical decisions, does a better job of predicting outcomes than the experts themselves.
Chew on that one for a while…
Thoughts on the paper?