Empirical Methods

Regression is a tool that can turn you into a fool

July 27th, 2023 | Empirical Methods, Factor Investing, Research Insights, Value Investing Research

Running regressions on past returns is a great tool for academic researchers who understand the approach's nuances, assumptions, pitfalls, and limitations. However, when factor regressions become part of a sales effort and/or are put in the hands of investors/advisors/DIYers, "the tool can quickly turn you into a fool."
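To make the tool concrete, here is a minimal sketch of the kind of factor regression the article critiques: regressing a fund's excess returns on the Fama-French three factors. This is not the article's own code; the file name and column labels are assumptions.

```python
# Sketch of a standard three-factor regression (illustrative only).
# Assumed CSV columns: fund, mkt_rf, smb, hml, rf (monthly returns).
import pandas as pd
import statsmodels.api as sm

df = pd.read_csv("monthly_returns.csv")           # placeholder file name
y = df["fund"] - df["rf"]                         # fund excess return
X = sm.add_constant(df[["mkt_rf", "smb", "hml"]]) # intercept + factor returns

model = sm.OLS(y, X).fit()
print(model.summary())  # "const" is alpha; slopes are factor loadings
# The article's caveat: alphas and loadings from short samples carry wide
# confidence intervals, so point estimates deserve heavy skepticism.
```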

Submergence: A Tool to Assess Drawdowns and Recoveries

May 22nd, 2023 | Empirical Methods, Research Insights, Basilico and Johnsen, Academic Research Insight

According to research by the authors, stocks and bonds have been submerged, i.e., below a prior peak and either in drawdown or still recovering from one, about 75% of the time since 1980, and Treasuries about 80% of the time. Submergences are therefore both commonplace and significant, which makes handling them important for investors and their strategies.
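For intuition, here is a rough sketch of how one might measure time spent submerged, meaning below a running high-water mark. This is an illustration on synthetic data, not the authors' methodology or sample.

```python
# Sketch: fraction of periods a return series spends "submerged", i.e.
# below its running high-water mark (drawdown plus recovery combined).
import numpy as np
import pandas as pd

def submerged_share(returns: pd.Series) -> float:
    """Fraction of periods the cumulative wealth index sits below its prior peak."""
    wealth = (1 + returns).cumprod()   # cumulative wealth index
    high_water = wealth.cummax()       # running high-water mark
    return float((wealth < high_water).mean())

# Example with random monthly returns (synthetic, not the paper's data)
rng = np.random.default_rng(0)
rets = pd.Series(rng.normal(0.006, 0.04, 516))  # ~43 years of months
print(f"Submerged {submerged_share(rets):.0%} of the time")
```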

Does diversification always benefit investors? No.

February 22nd, 2022 | Empirical Methods, Research Insights, Basilico and Johnsen, Academic Research Insight, Tactical Asset Allocation Research

This article examines the extent to which the assumptions behind diversification's benefits hold, and the extent to which investors should even want them to hold. The authors deliver a clever quote from Mark Twain (or maybe it was Robert Frost) that nails the issue in simple terms: “Diversification behaves like the banker who lends you his umbrella when the sun is shining but wants it back the minute it begins to rain”. Nicely expressed!
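The umbrella problem can be shown in a toy simulation: two assets whose correlation is low in calm months but jumps in crash months. The data, thresholds, and correlation regimes below are all assumptions made purely to illustrate the point.

```python
# Toy illustration: diversification measured when it's sunny vs. raining.
import numpy as np

rng = np.random.default_rng(1)
n = 600
a = rng.normal(0.007, 0.045, n)         # "stock" monthly returns (synthetic)
noise = rng.normal(0, 0.02, n)
# Asset b tracks a tightly only when a falls sharply (assumed regime switch)
beta = np.where(a < -0.05, 0.9, 0.1)
b = beta * a + noise

calm = a >= -0.05
print("corr in calm months :", round(np.corrcoef(a[calm], b[calm])[0, 1], 2))
print("corr in crash months:", round(np.corrcoef(a[~calm], b[~calm])[0, 1], 2))
# Low correlation when you don't need protection, high when you do --
# the banker taking back his umbrella in the rain.
```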

You Thought P-Hacking was Bad? Let’s talk about “Non-Standard Errors”

December 3rd, 2021 | Empirical Methods, Research Insights, Academic Research Insight

Most readers are familiar with p-hacking and the so-called replication crisis in financial research (see here, here, and here for differing views). Some claim that these research challenges are driven by a desire to find 'positive' results in the data, because positive results get published and negative results do not (the evidence backs these claims).

But this research project identifies and quantifies another potential issue with research: the researchers themselves! This "noise," created by differences in empirical techniques, programming languages, data pre-processing, and so forth, is deemed "non-standard errors," which may add even more uncertainty to our quest to determine intellectual truth. Yikes!
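A toy example of a non-standard error: two researchers answer the same question with equally defensible pre-processing choices and get different numbers. The data is synthetic and the specific cleaning rules are illustrative assumptions, not the paper's design.

```python
# Toy illustration of "non-standard errors": identical data, identical
# question, two reasonable cleaning choices, two different answers.
import numpy as np

rng = np.random.default_rng(42)
returns = rng.standard_t(df=3, size=10_000) * 0.02  # fat-tailed daily returns

# Researcher A: winsorize at the 1st/99th percentiles before averaging
lo, hi = np.percentile(returns, [1, 99])
mean_a = np.clip(returns, lo, hi).mean()

# Researcher B: instead drop observations beyond 3 standard deviations
mask = np.abs(returns - returns.mean()) <= 3 * returns.std()
mean_b = returns[mask].mean()

# The gap between the two estimates is the kind of researcher-induced
# dispersion the paper quantifies across many independent teams.
print(f"A: {mean_a:.6f}  B: {mean_b:.6f}  gap: {abs(mean_a - mean_b):.6f}")
```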
