Applied Quantitative Value (Part 4 of 4)



By David Foulke | September 27th, 2012 | Research Insights, Value Investing Research | 14 Comments

I sat down with Wes Gray, who also contributes frequently on this blog as Wesley Gray, Ph.D., and Toby Carlisle, who runs his own excellent value investing blog. Wes and Toby (WT) have a book coming out that is dedicated to finding the holy grail of systematic value investing, or Quantitative Value (QV). I’ve been riding their coattails in my 4-part series, which is meant to capture the high-level elements of what will be published in their new book.

In this post, I’ve summed up some results Wes sent my way, done a little analysis using the Turnkey Analyst screening tool, and shared some of the results from my interview below.

Quantitative Value (QV)

After several intermediate stops, we have arrived at our final destination: the fourth and final installment of our 4-part series on optimizing a systematic value investing approach to identifying low-risk, high-quality, undervalued stocks that generate market-beating returns. In this concluding post, we will take our readers through WT’s entire investing process, from soup to nuts, first by defining our screening universe, and then by applying the elements of WT’s Quantitative Value strategy.

The strategy has been sketched in the first 3 parts of our 4-part series.

Below is a visual overview of our Quantitative Value investment process:


Defining the Universe

The first step in the QV investing process involves setting some broad parameters that will form the boundaries of a universe for further screening, along with some definitions around the QV methodologies and process.  There are several reasons WT place limits on the stocks they will consider.  A critical aspect involves liquidity, which is related to the size of the stocks under consideration.  Stocks that are too small carry wide bid/ask spreads and limited liquidity, so small volumes can produce large price moves, and including them can significantly overstate backtested returns.  In other words, if WT include small stocks in their universe, the backtested results may show phenomenal returns, but these returns may be unobtainable in the real world, even when operating with a small amount of capital.  In order to honestly assess the QV approach, WT eliminate all stocks below the 40th percentile breakpoint of the NYSE by market capitalization, and WT market-weight (also referred to as “value-weight” in the academic literature) their portfolios. Market-weighted portfolios are constructed similarly to the S&P 500 index, which weights each firm within the index according to its market value.  As of December 31, 2011, the 40% NYSE market cap breakpoint corresponded to a market capitalization of approximately $1.4 billion.  WT also exclude specific securities from the data set, including ADRs, REITs, ETFs and others, as well as industries such as utilities and financials, which present various problems for the QV approach.  Another requirement is that the firms WT analyze must have an adequate number of years of data to draw from, since some of the QV metrics require financial data covering the past 8 years.
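To make the universe construction concrete, here is a minimal pandas sketch of these filters. The column names (`market_cap`, `security_type`, `industry`, `years_of_data`) are hypothetical placeholders, not the actual CRSP fields WT use:

```python
import pandas as pd

def build_universe(stocks: pd.DataFrame, nyse_caps: pd.Series) -> pd.DataFrame:
    """Keep stocks above the 40th-percentile NYSE market-cap breakpoint,
    drop excluded security types and industries, require 8 years of data,
    and assign market-value weights."""
    breakpoint_40 = nyse_caps.quantile(0.40)  # ~$1.4B as of 12/31/2011
    eligible = stocks[
        (stocks["market_cap"] >= breakpoint_40)
        & ~stocks["security_type"].isin(["ADR", "REIT", "ETF"])
        & ~stocks["industry"].isin(["Utilities", "Financials"])
        & (stocks["years_of_data"] >= 8)
    ].copy()
    # Market-weight (a.k.a. value-weight): each name in proportion to its cap
    eligible["weight"] = eligible["market_cap"] / eligible["market_cap"].sum()
    return eligible
```

The breakpoint is computed from NYSE-listed names only, then applied to the full universe, which is why the cutoff lands near $1.4 billion rather than at a fixed dollar figure.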

Another factor WT consider in the analysis is transaction costs.  In order to accurately account for the potential effect of transaction costs on the real-world implementation of the strategy, WT minimize them by rebalancing only once per year and by investing only in large, liquid stocks.  WT also attempt to avoid other shortcomings of backtesting: WT use the CRSP database, which avoids survivorship bias since it includes historical corporate action and delisting information, and WT lag the fundamental data by 6 months in order to avoid look-ahead bias.
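To illustrate the look-ahead point, here is a minimal sketch of lagging annual fundamentals by six months before a backtest may use them; the `fiscal_year_end` and `available_date` column names are hypothetical:

```python
import pandas as pd

def lag_fundamentals(fundamentals: pd.DataFrame, months: int = 6) -> pd.DataFrame:
    """Shift each annual report's usable date forward so the backtest only
    trades on data after it was realistically public."""
    lagged = fundamentals.copy()
    lagged["available_date"] = lagged["fiscal_year_end"] + pd.DateOffset(months=months)
    return lagged
```

A December 31 fiscal year-end thus becomes usable on the following June 30, which lines up with the annual rebalance date mentioned in the comments below.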

We ran the quantitative model the other day and, utilizing the 40th percentile breakpoint of the NYSE, we established our initial universe of 829 stocks.  Next, we “clean” this universe of investable stocks by eliminating those that pose a risk of permanent capital impairment, as discussed in Part 1 of our series.  Accordingly, we apply WT’s three basic tools in order to avoid the risk of financial statement manipulation, fraud, or financial distress (bankruptcy) in a particular stock.

The first tool involves two accruals metrics that measure the stock and flow of accruals.  If a company is aggressively using accruals and scores above the 95th percentile of our universe, we eliminate it.  Looking over the output from our model, we see that of our 829 stocks, 40 fail our accruals test and are eliminated.  Our second tool is the PROBM model, which employs financial statistics to predict the risk of financial statement manipulation and fraud; once again, we eliminate those falling in the top 5% of our universe.  In our current model, PROBM has identified an additional 39 stocks that are statistically likely to be engaged in manipulation, and we eliminate these from consideration.  Finally, we use our third tool, a logistic regression employing accounting and equity market-based metrics to determine an overall probability of financial distress; once again, we eliminate the top 5% of the output, which in our current model results in the exclusion of an additional 39 stocks.  As a result of this “cleaning” process, which eliminates stocks that could cause permanent capital impairment, our total screenable universe has been reduced from 829 stocks to 711 stocks.
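The three sequential screens can be sketched as follows; `accruals_score`, `manipulation_prob`, and `distress_prob` are hypothetical column names standing in for the actual model outputs:

```python
import pandas as pd

def clean_universe(universe: pd.DataFrame) -> pd.DataFrame:
    """Sequentially drop names above the 95th percentile on each risk
    measure: accrual aggressiveness, manipulation probability (PROBM-style),
    and financial distress probability."""
    for risk_col in ["accruals_score", "manipulation_prob", "distress_prob"]:
        cutoff = universe[risk_col].quantile(0.95)
        universe = universe[universe[risk_col] < cutoff]
    return universe
```

Because the screens run sequentially on a shrinking universe, each pass removes roughly (but not exactly) 5% of the remaining names, which is consistent with the 40 + 39 + 39 eliminations taking 829 stocks down to 711.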

Now that we have prepared our universe for further screening by eliminating those stocks that pose the greatest risk of permanently impairing our capital, we are ready to proceed to the next phase of our process: identifying the cheapest stocks.

Look at Cheap Stuff

In our Part 2 post, we discussed the horse race we ran to determine the best metric for capturing the value anomaly, and the winner of that race: Enterprise Yield using EBIT.  We therefore look to our thoroughbred, Enterprise Yield using EBIT, to sort our universe into deciles.  We are now focused on the cheapest decile, which further reduces our universe from 711 stocks to a leaner and cheaper group of 71 stocks.
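The decile sort is a simple ranking; here is a minimal sketch, assuming `ebit` and `enterprise_value` columns have already been computed:

```python
import pandas as pd

def cheapest_decile(universe: pd.DataFrame) -> pd.DataFrame:
    """Rank on enterprise yield (EBIT / EV) and keep the cheapest decile,
    i.e., the names at or above the 90th percentile of yield."""
    universe = universe.copy()
    universe["enterprise_yield"] = universe["ebit"] / universe["enterprise_value"]
    cutoff = universe["enterprise_yield"].quantile(0.90)
    return universe[universe["enterprise_yield"] >= cutoff]
```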

Find High Quality Cheap Stuff

The next stage of our process involves sorting this very cheap decile of stocks on the basis of their quality.  In our Part 3a and Part 3b posts, we described the two ways we evaluate stocks for quality.

Our first quality assessment tool, discussed in Part 3a, involves a review of the firm’s franchise, as measured by its returns on capital and assets, as well as its margin strength.  Once we have calculated our four franchise metrics, we determine how each individual metric stacks up on a percentile basis versus our universe, and we then take an average of those four percentile scores to determine our overall franchise quality score.  For example, Questcor Pharmaceuticals scores better than 95% of our universe for our four franchise calculations. Now that’s a franchise!

Our second quality assessment tool measures financial strength; in Part 3b of our series we discuss the various metrics we use, which are closely related to Joseph Piotroski’s F_Score.  Our own financial strength score, the FS_Score, involves 10 separate metrics that measure a firm’s profitability, stability, and recent operating improvements.  Once we have measured each financial strength metric and awarded a 1 or a 0 for each, we arrive at a score from 0 to 10, which equates to an overall financial strength score ranging from 0% to 100%.  Returning to our Questcor Pharmaceuticals example, we see that the firm generates an FS_Score of 9, which equates to a financial strength quality score of 90%. Now we take the average of our two individual quality measures to arrive at a composite quality score.  For Questcor Pharmaceuticals, we average the firm’s franchise score of 95% and its financial strength score of 90% to arrive at an overall quality score of approximately 93%.
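The two quality scores and their combination can be sketched like this; the four franchise column names below are hypothetical stand-ins for the actual return-on-capital, return-on-assets, and margin metrics:

```python
import pandas as pd

FRANCHISE_COLS = ["return_on_capital", "return_on_assets",
                  "margin_level", "margin_stability"]  # hypothetical names

def quality_scores(decile: pd.DataFrame) -> pd.DataFrame:
    """Franchise score = average percentile rank across four franchise
    metrics; financial strength = FS_Score / 10; quality = their average."""
    decile = decile.copy()
    decile["franchise"] = decile[FRANCHISE_COLS].rank(pct=True).mean(axis=1)
    decile["fin_strength"] = decile["fs_score"] / 10.0
    decile["quality"] = (decile["franchise"] + decile["fin_strength"]) / 2.0
    return decile
```

For a Questcor-like name with a 95% franchise score and an FS_Score of 9, the composite works out to (0.95 + 0.90) / 2 = 92.5%, which rounds to the 93% quoted above.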

Invest in High Quality Cheap Stuff

Now that we have sorted our top value decile by quality, we invest in the top 50% of the names, further reducing the universe in which we actually invest to approximately 35 stocks, which are rebalanced yearly.  In order to get a sense for how our investment methodology integrates into the output of our model, take a look at the output below, which provides summary details relating to the top 10 names generated by our Quantitative Value process.


The results are hypothetical results and are NOT an indicator of future results and do NOT represent returns that any investor actually attained. Indexes are unmanaged, do not reflect management or trading fees, and one cannot invest directly in an index. Additional information regarding the construction of these results is available upon request.
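The final cut (keeping the top half of the cheap decile by quality) can be sketched as follows, assuming a composite `quality` column has already been computed:

```python
import pandas as pd

def final_portfolio(cheap_decile: pd.DataFrame) -> pd.DataFrame:
    """Hold the top half of the cheapest decile, ranked on composite quality."""
    return cheap_decile[cheap_decile["quality"] >= cheap_decile["quality"].median()]
```

Applied to the 71-stock cheap decile above, this median cut yields the roughly 35-name portfolio that is held until the next annual rebalance.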

Let’s take a closer look at Questcor Pharmaceuticals.  From the output above, we can see that the company’s market capitalization of $3.0 billion is well above our 40% breakpoint threshold of $1.4 billion.  The company has generated a safety score of 3/3, indicating there are no obvious statistical red flags that would indicate we run a risk of capital impairment via financial statement manipulation, fraud or financial distress.  The company is cheap, with an EBIT/EV yield of 19%, placing it between the median and the 75th percentile of our top decile for cheapness.  Moreover, the company generates a composite quality score of 93%, indicating that it possesses a franchise and is showing strong statistical signs of financial strength.

Questcor is by no means perfect, and the market is pissed about recent news from Aetna. But one thing is clear: Questcor has been an excellent firm, and it is currently cheap.

How does it work?

Many aspects of the process above involve analytical steps that you might expect to see from a human analyst reviewing whether to invest.  And that is by design.  WT based their quantitative methodology on longstanding value investing principles and on well-established academic research from across the field of finance.  WT have also taken care to be conservative in establishing their methodology, eliminating securities that might skew their results or present real-world trading difficulties, and minimizing backtesting risks such as survivorship bias, look-ahead bias, and delisting effects.

Now let’s see how the QV strategy looks from a risk and return perspective by reviewing the results of WT backtests, which cover the period 1974 through 2011:


The results are hypothetical results and are NOT an indicator of future results and do NOT represent returns that any investor actually attained. Indexes are unmanaged, do not reflect management or trading fees, and one cannot invest directly in an index. Additional information regarding the construction of these results is available upon request.

The results speak for themselves.  The simulated quantitative value approach generates a compound annual growth rate of 17.7% over the period 1974 through 2011, dramatically outperforming the S&P 500 index, which returned a 10.5% CAGR.  The outperformance comes with reduced volatility, as is demonstrated by its strong Sharpe ratio of 0.74, versus the S&P’s 0.37.  Its Sortino of 1.18 is also superior to the 0.56 for the market.  The strategy also performed with lower drawdowns, with a maximum drawdown over the period of -32%, versus -50% for the S&P.
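For readers who want to reproduce these summary statistics on their own return series, here is a minimal sketch of the standard definitions (CAGR, Sharpe ratio, maximum drawdown). It assumes a series of annual returns and a constant risk-free rate, which is a simplification of how the published figures were computed:

```python
import numpy as np

def perf_stats(annual_returns: np.ndarray, rf: float = 0.0) -> dict:
    """CAGR, Sharpe ratio, and maximum drawdown from annual returns."""
    wealth = np.cumprod(1.0 + annual_returns)              # growth of $1
    cagr = wealth[-1] ** (1.0 / len(annual_returns)) - 1.0
    excess = annual_returns - rf                           # excess over risk-free
    sharpe = excess.mean() / excess.std(ddof=1)
    running_max = np.maximum.accumulate(wealth)            # peak-to-date wealth
    max_drawdown = ((wealth - running_max) / running_max).min()
    return {"cagr": cagr, "sharpe": sharpe, "max_drawdown": max_drawdown}
```

To put the CAGR gap in perspective: compounding at 17.7% for the 38 years from 1974 through 2011 turns $1 into roughly $490, versus roughly $44 at the S&P 500’s 10.5%.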

We believe that the investing framework WT have outlined above represents a reasonable approach to investing for investors who want to preserve capital and beat the markets over the long-term. In summary, we already knew that fundamentals-based investing worked, but now WT have confirmed that systematic value investing works as well.

This concludes our 4-part series covering a systematic value investing approach to identifying low risk, high quality, undervalued stocks that generate market beating returns.  We hope you have enjoyed walking through the different parts of our model.

If you want to dig a LOT deeper, you can read an in-depth treatment of the quantitative value process in a book co-written by Wesley R. Gray, Ph.D., and Toby Carlisle. The book will be available in December.

Pre-order today!

  • The views and opinions expressed herein are those of the author and do not necessarily reflect the views of Alpha Architect, its affiliates or its employees. Our full disclosures are available here. Definitions of common statistics used in our analysis are available here (towards the bottom).


About the Author:

David Foulke
Mr. Foulke is currently an owner/manager at Tradingfront, Inc., a white-label robo advisor platform. Previously he was a Managing Member of Alpha Architect, a quantitative asset manager. Prior to joining Alpha Architect, he was a Senior Vice President at Pardee Resources Company, a manager of natural resource assets, including investments in mineral rights, timber and renewables. He has also worked in investment banking and capital markets roles within the financial services industry, including at Houlihan Lokey, GE Capital, and Burnham Financial. He also founded two technology companies: an internet-based provider of automated translation services, and an online wholesaler of stone and tile. Mr. Foulke received an M.B.A. from The Wharton School of the University of Pennsylvania, and an A.B. from Dartmouth College.
  • Philwhittington

    This is awesome, but I have a question.

    Pre-ordering the book – it’s $51.41 – that seems pretty expensive! Any chance of a cheap kindle version?

  • anonymoose

    Very interesting stuff, I’m looking forward to the book. Got it on my wishlist on amazon.

One key point on the tradability of this approach is the periods of underperformance compared to broad indices. A quick look at the backtest tool shows that from the 2009 lows to now, if one were to go long your value approach and short the S&P 500, they would be down over 30%. Even if you did the long-only approach, it would be quite painful to keep investing like this when you underperform the market for a long time. Of course you can say that the horizon is much longer than that, but it’s still an issue.

    Do you think that there is any regularity in this that can be exploited? i.e. value underperforms for some time after the market is undervalued as a whole (rebounds after recessions)?

  • Steve

    100% rolling 5 year wins….I don’t think you can ask better than that! From my reading, the wrong approach is to look for a strategy that hardly ever under performs a benchmark. My opinion follows: the right strategy is a philosophy of accepting the periods of under performance. In fact, it’s my belief that this is what helps keep these strategies (like value and momentum) profitable. There are times when investors simply get thrown off the roller coaster (of the strategy). In my opinion, the best investor wins, and to be the best investor – a part of that is being the best ‘loser’. Being the best at handling the tough times. Anyway, that’s the conclusion I’ve reached.

    However, again from my own reading of academia and practitioner’s works, it seems to me to be pretty conclusive that you can mitigate some of the periods of under-performance by not being loyal to the value or momentum school of investing. By being a rebel and mixing the 2 strategies with an edge, you don’t harm the return, and you reduce some of the periods of under-performance (of either strategy). Sometimes they’ll both under perform, but mostly one will be over and the other will be under performing.

    In my opinion, this is a better approach than attempting to reduce periods of under performance by tinkering within the strategies themselves. I see quite a bit of talk in online places I visit about “market filters”, going to cash etc when the market is off. There are times when that will work, e.g. the current period where we have the GFC period. Previous periods perhaps not. I personally think that people playing with these sorts of market filters are possibly in dangerous territory, as they are weighting their testing too heavily towards recent market history. Having said that, some have shown long term moving averages over the market to improve things (not from a performance point of view, but a volatility point of view). When I start playing with these ideas however, I start to become aware that I’m introducing more variables and am at risk of introducing error into my strategy.

    Momentum wins (over time). Value wins (over time). Heck, even the market wins for that matter. I’d rather mix the winning strategies than attempt to overcome a perceived weakness in the strategies (periods of under performance) when the very attempt may be ruining what actually makes them winning strategies in the first place! If that makes sense?

Having said all that…if you wanted to go down this path and were investing globally, you might want to check out research on the CAPE (cyclically adjusted PE) ratio, especially if you are more comfortable with a value approach. If a country’s market is “forecast” to do well going forward, perhaps you could focus your stock selection there. Currency risk?

    PS. Apologies for how many times I managed to say, “in my opinion” but I’m just well aware that everyone has to reach their own conclusions in this game.

  • agreed. We are going to work with the publisher on this issue. I know they are doing a kindle version as well…stay tuned.

  • One issue with value is that it doesn’t win EVERY YEAR. The goal of the book was to simply figure out how to optimize a quant value system, but in the end, a value system is a value system.

    We’ve done a lot of work trying to figure out how to ‘time’ value and we’ve developed a tactical asset allocation model overlay, but this is something done via Empiritrage, LLC and is not generally available to the public.

Ya never know. Jeremy Siegel wrote his book “Stocks for the Long Run” in 2007 and showed that there wasn’t a single period since 1880 where stocks underperformed bonds if you had a 10 or a 15 yr horizon (I can’t remember the details). Of course, if he had published the book a few years later, we would have seen the first time ever that over a long-term rolling period bonds beat stocks. Evidence can sometimes be compelling–and often does provide some indication of the future–but not always.

  • Steve

    Looking forward to the book!
    Just wondering – did you guys monte carlo test the strategy?

  • bensonq

    Hi guys. I just discovered this blog and am very excited for this book. I’m not sure if you cover this in the book (I will be reading) but I wondered if you guys have opinions or data on two O’Shaughnessy What Works on Wall Street claims that have been disputed in other papers
    1) Adding a momentum factor to stock selection by adding a best recent performance screen to value screens improves results. (suggesting you are buying value when it is being recognized as opposed to abstractly) True or False in your opinions?
    2) Turning over a portfolio on faster time frames, like 3-6 months outperforms holding for one year if one is willing to do the work and taxes are not an issue.
    Thanks so much and keep up the great work,

  • Hi Ben,

    1. Momentum is a pervasive factor that is hard to explain away. Integrated correctly, momentum can probably add some “value” on value.
    2. Yes, more turnover enhances returns, but at the expense of transaction costs and taxes.

  • bernhardf

    Can you explain something about the turnover of your model?
    As you stated in your book, the portfolio will be rebalanced every year. So if a stock is still in your best decile it will be for one more year correct ?
What is then your turnover?
    And will for instance, a longer timeframe (2-5 years) enhance returns ?

    Thank you !

Rebalance is annual and turnover is usually around 75%–25% of the names don’t change; 75% are new. Each June 30 the portfolio is rebalanced to hold the top half of quality firms from the top decile of firms ranked on cheapness. For example, if there are 100 names in the top decile of cheapness, the portfolio for that year would contain 50 names.
    Different rebalancing periods change returns. We have not looked at rebalance periods longer than 1yr. We have looked at higher frequency rebalance periods–there are definitely ways to enhance returns via faster rebalancing, but there are also significant tax consequences.

  • bernhardf

    So each year are 75% new in the portfolio.
    What do you mean with “turnover is usually around 75%–25% of the names don’t change” ?

    thank you!

  • Joe

    Have any of the higher frequency rebalancing periods at which you’ve looked offered significantly higher returns than your one year period, such that there may be a benefit for low commission, tax-deferred accounts?

  • Joe

I’m also wondering if there is a reason you compose your portfolio of the top half of quality firms ranked on cheapness, rather than a different quantity. Is that the number that seems to offer the best returns, or is it for diversification purposes, or something different?

    Thanks so much for all this work. It’s very interesting seeing how you’ve expanded the search for a value based systematic approach.