Published On: April 21st, 2016 | Categories: Behavioral Finance

Overconfidence is the death of everything in investing. I suffer from the problem just like everyone else because last time I checked…I’m human.

As humans, we need to face the reality of overconfidence. Overconfidence is like Michael Jordan — you can’t stop it, you can only hope to contain it.

One way to minimize the chance we believe in our own bull feces is to continually question our core beliefs, read the latest research, invert, collect out-of-sample data, seek non-conforming opinions, etc.

Over time, we’ve developed 4 core beliefs that drive everything we do as a firm. We’ll focus on #1 in this post:

  1. We believe in Systematic Decision-Making, not ad-hoc decision-making. Disciplined and repeatable processes are more reliable than discretionary judgment.
  2. We believe in Evidence-Based Investing, not story-based investing. Rigorous, data-driven research drives success; stories drive sales.
  3. We believe in Transparency, not black-boxes. We are committed to having investors understand what we are doing.
  4. We believe in Win-Wins, not unsustainable relationships. We are committed to a business model that prioritizes client success.

Belief #1 is driven by the theory that disciplined quantitative processes beat human judgment, on average. The core assumption behind this theory is that disciplined processes tend to strike a favorable trade-off between estimation bias and estimation variance, whereas human judgment strikes a less favorable balance between the two. A great discussion of this topic is here. Here is a visual from Scott Fortmann-Roe’s website that explains the concept nicely:

Source: http://scott.fortmann-roe.com/docs/BiasVariance.html

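The bias-variance intuition can be made concrete with a toy Monte Carlo simulation. This is a hypothetical sketch, not anything from our actual process: all numbers (the “true” mean, noise level, shrinkage weight) are made up for illustration. It compares an unbiased estimator (the raw sample mean, which has high variance in small samples) against a deliberately biased but disciplined rule (shrinking the estimate toward a fixed prior of zero), and shows the biased rule can win on mean squared error:

```python
import random

random.seed(42)

# Hypothetical setup: noisy observations of an unknown quantity,
# estimated from a small sample -- typical of forecasting problems.
TRUE_MEAN = 1.0       # the "true" value we are trying to estimate
NOISE_SD = 4.0        # observation noise
N_OBS = 10            # small sample size
N_TRIALS = 100_000    # Monte Carlo repetitions

def sample_mean(xs):
    """Unbiased estimator: zero bias, but high variance with few observations."""
    return sum(xs) / len(xs)

def shrunk_mean(xs, prior=0.0, weight=0.5):
    """Disciplined rule: shrink the sample mean halfway toward a fixed prior.
    This introduces bias but cuts variance by a factor of weight**2."""
    return weight * prior + (1 - weight) * sample_mean(xs)

def mse(estimator):
    """Average squared error of an estimator over many simulated samples."""
    errs = []
    for _ in range(N_TRIALS):
        xs = [random.gauss(TRUE_MEAN, NOISE_SD) for _ in range(N_OBS)]
        errs.append((estimator(xs) - TRUE_MEAN) ** 2)
    return sum(errs) / N_TRIALS

mse_raw = mse(sample_mean)     # ~ variance only: 16/10 = 1.6
mse_shrunk = mse(shrunk_mean)  # ~ bias^2 + variance: 0.25 + 0.4 = 0.65

print(f"MSE, unbiased sample mean: {mse_raw:.3f}")
print(f"MSE, shrunken estimate:    {mse_shrunk:.3f}")
```

The shrunken rule is wrong on average (it is biased toward zero), yet its total error is lower because MSE decomposes into bias squared plus variance. That is the sense in which a rigid, systematic rule can beat a flexible judgment that chases every data point.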

We have a long post dedicated to explaining why we believe systematic processes might be more effective than human-based “instinctual” decision-making. This belief rests on evidence from psychology, which strongly suggests that mechanical prediction devices are more effective than human-based prediction. One of the key pieces of research on the subject is a study by William Grove, David Zald, Boyd Lebow, Beth Snitz, and Chad Nelson, “Clinical Versus Mechanical Prediction: A Meta-Analysis.” We reference this paper in posts here, here, and many other places.

Because the result is so counter-intuitive — how can a stupid machine be better than a human? — I am always worried that maybe we’re missing something. I searched far and wide for serious research and studies that question the core insight from the Grove et al. meta-analysis. No luck.
So perhaps Paul Meehl, the eminent scholar in the field, has it right:

There is no controversy in social science that shows such a large body of qualitatively diverse studies coming out so uniformly in the same direction as this one [models outperform experts].

Or perhaps we are looking at stale data? After all, the meta-analysis conducted by Grove et al. was published over 15 years ago, and many of the studies it included were over 20 years old. To investigate, I decided to email Will Grove, the lead author of the original study, to ask him whether there had been any “new findings” in this research vein. Of course, I fully expected the professor to blow me off, but he was surprisingly willing to have a conversation on the topic.

Our conversation is below:

–My inquiry

from: Wesley Gray <abc@xyz.com>
to: William M. Grove <abc@xyz.com>
date: Wed, Jan 13, 2016 at 9:12 PM
subject: Clinical vs. mechanical prediction
[personal introduction and random conversation]…
I’m writing you because the results you find in your meta-analysis are so counter-intuitive on so many levels. I am an evidence-based person so I believe in your conclusions, but I was just curious if there have been extensions or ​serious critiques from others in your field? I want to read the “anti-Grove” research so I can really understand the debate (and as Meehl said, maybe there isn’t a debate).
Thanks,
Wes

–Professor Grove response

from: William M. Grove <abc@xyz.com>
to: Wesley Gray <abc@xyz.com>
date: Thu, Mar 24, 2016 at 3:14 PM
subject: Re: Clinical vs. mechanical prediction
[personal introduction and random conversation]…

I’m glad you found the meta-analysis useful.  I am aware of no later-published reviews, whether quantitative (meta-analysis) or qualitative (traditional narrative research review).  Our findings are not counter-intuitive to any serious student of the human judgment literature, which shows human (presumably including clinical) judgment to be beset with a number of serious biases, as well as apparent reliance on what are called “heuristics”—rules of thumb to guide judgment by, but which in fact serve to lessen the accuracy of human judgments.  My papers on this subject have never elicited, as far as I can recall, a single published disagreement or criticism—among clinical psychologists, the vast majority of PhDs accept it as well established that the model beats the judge, nearly every time.  That so few clinicians use actuarial models in practice is largely due to the failure of researchers to supply them with models for their most important outcomes, in easy-to-use formats.
Regards,
Will Grove
There you have it. Computers still beat experts.
My confirmation bias instinct is happy to report that our core assumption that systems beat humans, on average, is still intact. Overconfidence is alive and well :-)

About the Author: Wesley Gray, PhD

After serving as a Captain in the United States Marine Corps, Dr. Gray earned an MBA and a PhD in finance from the University of Chicago where he studied under Nobel Prize Winner Eugene Fama. Next, Wes took an academic job in his wife’s hometown of Philadelphia and worked as a finance professor at Drexel University. Dr. Gray’s interest in bridging the research gap between academia and industry led him to found Alpha Architect, an asset management firm dedicated to an impact mission of empowering investors through education. He is a contributor to multiple industry publications and regularly speaks to professional investor groups across the country. Wes has published multiple academic papers and four books, including Embedded (Naval Institute Press, 2009), Quantitative Value (Wiley, 2012), DIY Financial Advisor (Wiley, 2015), and Quantitative Momentum (Wiley, 2016). Dr. Gray currently resides in Palmas Del Mar Puerto Rico with his wife and three children. He recently finished the Leadville 100 ultramarathon race and promises to make better life decisions in the future.

Important Disclosures

For informational and educational purposes only and should not be construed as specific investment, accounting, legal, or tax advice. Certain information is deemed to be reliable, but its accuracy and completeness cannot be guaranteed. Third party information may become outdated or otherwise superseded without notice.  Neither the Securities and Exchange Commission (SEC) nor any other federal or state agency has approved, determined the accuracy, or confirmed the adequacy of this article.

The views and opinions expressed herein are those of the author and do not necessarily reflect the views of Alpha Architect, its affiliates or its employees. Our full disclosures are available here. Definitions of common statistics used in our analysis are available here (towards the bottom).
