Overconfidence is the death of everything in investing. I suffer from the problem just like everyone else because last time I checked…I’m human.
As humans, we need to face the reality of overconfidence. Overconfidence is like Michael Jordan — you can’t stop it, you can only hope to contain it.
One way to minimize the chance we believe in our own bull feces is to continually question our core beliefs, read the latest research, invert, collect out-of-sample data, seek non-conforming opinions, etc.
Over time, we’ve developed four core beliefs that drive everything we do as a firm. We’ll focus on #1 in this post:
- We believe in Systematic Decision-Making, not ad-hoc decision-making. Disciplined and repeatable processes are more reliable than discretionary judgment.
- We believe in Evidence-Based Investing, not story-based investing. Rigorous, data-driven research drives success; stories drive sales.
- We believe in Transparency, not black-boxes. We are committed to having investors understand what we are doing.
- We believe in Win-Wins, not unsustainable relationships. We are committed to a business model that prioritizes client success.
Belief #1 is driven by a theory that disciplined quantitative processes beat human judgment, on average. The core assumption behind this theory is that disciplined processes strike a favorable trade-off between estimation bias and estimation variance, whereas human judgment has a less favorable balance between the two. A great discussion on this topic is here. Here is a visual from Scott’s website that explains the concept nicely:
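The trade-off rests on a standard decomposition: mean squared error equals squared bias plus variance. A minimal simulation (not from the post; the bias and noise numbers are purely illustrative assumptions) shows how a slightly biased but consistent "model" can beat an unbiased but inconsistent "judge":

```python
import random

random.seed(0)

TRUE_VALUE = 10.0  # the quantity being estimated

def simulate(bias, noise_sd, trials=10_000):
    """Return the mean squared error of an estimator with the given bias and noise."""
    sq_errors = []
    for _ in range(trials):
        estimate = TRUE_VALUE + bias + random.gauss(0, noise_sd)
        sq_errors.append((estimate - TRUE_VALUE) ** 2)
    return sum(sq_errors) / trials

# A disciplined model: slightly biased but consistent (low variance).
model_mse = simulate(bias=1.0, noise_sd=0.5)

# A discretionary judge: unbiased on average but inconsistent (high variance).
judge_mse = simulate(bias=0.0, noise_sd=3.0)

# MSE ~ bias^2 + variance: roughly 1.25 for the model vs 9.0 for the judge.
print(f"model MSE: {model_mse:.2f}, judge MSE: {judge_mse:.2f}")
```

Under these illustrative numbers the consistent-but-biased process wins decisively, which is the intuition behind preferring repeatable processes over case-by-case judgment.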
We have a long post dedicated to explaining why we believe systematic processes might be more effective than human-based “instinctual” decision-making. This belief is based on evidence from psychology, which seems clear: mechanical prediction devices are more effective than human-based prediction. One of the key pieces of research on the subject is a study by Will Grove, David Zald, Boyd Lebow, Beth Snitz, and Chad Nelson, titled “Clinical Versus Mechanical Prediction: A Meta-Analysis.” We reference this paper in posts here, here, and many other places.
Because the result is so counter-intuitive — how can a stupid machine be better than a human? — I am always worried that maybe we’re missing something. I searched far and wide for serious research and studies that question the core insight from the Grove et al. meta-analysis. No luck.
So perhaps Paul Meehl, the eminent scholar in the field, has it right:
> There is no controversy in social science that shows such a large body of qualitatively diverse studies coming out so uniformly in the same direction as this one [models outperform experts].
Or perhaps we are looking at stale data? After all, the meta-analysis conducted by Grove et al. was published over 15 years ago, and many of the studies included were over 20 years old. To investigate, I decided to email Will Grove, the lead author of the original study, to ask him whether there have been any “new findings” in this research vein. Of course, I fully expected the professor to blow me off, but he was surprisingly willing to have a conversation on the topic.
Our conversation is below:
from: Wesley Gray <[email protected]>
to: William M. Grove <[email protected]>
date: Wed, Jan 13, 2016 at 9:12 PM
subject: Clinical vs. mechanical prediction
I’m writing you because the results you find in your meta-analysis are so counter-intuitive on so many levels. I am an evidence-based person so I believe in your conclusions, but I was just curious if there have been extensions or serious critiques from others in your field? I want to read the “anti-Grove” research so I can really understand the debate (and as Meehl said, maybe there isn’t a debate).

Thanks,
Wes
Professor Grove’s response:
I’m glad you found the meta-analysis useful. I am aware of no later-published reviews, whether quantitative (meta-analysis) or qualitative (traditional narrative research review). Our findings are not counter-intuitive to any serious student of the human judgment literature, which shows human (presumably including clinical) judgment to be beset with a number of serious biases, as well as apparent reliance on what are called “heuristics”—rules of thumb to guide judgment by, but which in fact serve to lessen the accuracy of human judgments. My papers on this subject have never elicited, as far as I can recall, a single published disagreement or criticism—among clinical psychologists, the vast majority of PhDs accept it as well established that the model beats the judge, nearly every time. That so few clinicians use actuarial models in practice is largely due to the failure of researchers to supply them with models for their most important outcomes, in easy-to-use formats.

Regards,
Will Grove