The Quantitative Value Investing Philosophy

Benjamin Graham, who first established the idea of purchasing stocks at a discount to their intrinsic value more than 80 years ago, is known today as the father of value investing. Since Graham’s time, academic research has shown that low price-to-fundamentals stocks have historically outperformed the market. In the investing world, Graham’s most famous student, Warren Buffett, has inspired legions of investors to adopt the value philosophy. Despite widespread knowledge that value investing generates higher returns over the long haul, value-based strategies have continued to beat the market. How is this possible? The answer relates to a fundamental truth: human beings behave irrationally. We follow an evolutionary mindset that focuses on surviving in the jungle, not optimizing our 401(k) portfolio. While we will never eliminate our survival instincts, we can minimize their impact by employing quantitative tools.

“Quantitative” is often considered an opaque mathematical black art, practiced only by Ivory Tower academics and supercomputers. Nothing could be further from the truth. Quantitative, or systematic, processes are merely tools that value investors can use to minimize their “survival” instincts when investing. Quantitative tools serve two purposes: 1) protect us from our own behavioral errors, and 2) exploit the behavioral errors of others. These tools do not need to be complex, but they do need to be systematic. The research overwhelmingly demonstrates that simple, systematic processes outperform human “experts.” The inability of human beings to robustly outperform simple systematic processes holds true for investing, just as it holds for most other fields.(1)

Much of the analysis conducted by value investors—reading financial statements, interpreting past trends, and assessing relative valuations—can be done faster, more effectively, and across a wider swath of securities via an automated process. Gut-instinct value investors argue that experience adds value in the stock-selection process, but the evidence doesn’t support this interpretation.(2) Why? When value investors respond to non-quantitative signals (e.g., the latest headlines on MSNBC, their expert friend’s opinion at the cocktail party, etc.), they unconsciously introduce cognitive biases into their investment process. These biases lead to predictable underperformance.  Alpha Architect’s Quantitative Value (QV) philosophy is best suited for value investors who can acknowledge their own fallibility. Granted, our approach is not infallible, and should always be questioned; however, the approach seeks to deliver the following: a systematic, evidence-based, value-focused investment strategy that is built to beat behavioral bias.

Note: For those who want to dive right into the specifics of the Quantitative Value Indexes, information is available below:

An Introduction to the Quantitative Value Index

When we set out to develop our Quantitative Value (QV) approach we had one mission in mind:

  • Identify the most effective way to systematically capture the value premium.

Our mission involved three core beliefs:

  • Value investing works over the long haul because the strategy is highly volatile.
  • There is a mispricing component of the value premium that is caused by an overreaction to negative fundamentals.
  • To extract the highest expectation from the value premium, the portfolio needs to be focused (i.e., 50 stocks or less) and not a closet index.

After a decade of value investing research, rewrites, and regressions, our comprehensive findings on systematic value investing were published in our book, Quantitative Value.

The book has been well received by the investment community, for example:(2)

This book is an excellent primer to quantitative investing…

Alex Edmans, Ph.D., Associate Professor of Finance, The Wharton School, University of Pennsylvania

Quantitative Value is a must read for those with a love of value investing and a desire to make the investment process less ad-hoc.

Tony Tang, Ph.D., Global Macro Researcher and Portfolio Manager, AQR Capital Management

Gray and Carlisle take systematic value-based investing to the next level.

Raife Giovinazzo, Ph.D., CFA, Research Analyst in Scientific Active Equity, Blackrock


What resulted from our research is a reasonable, evidence-based approach to systematic value investing. Others agreed with us. In 2012, Alpha Architect partnered with a multi-billion dollar family office and sophisticated investors to turn our theoretical QV approach into reality. We spent several years building the operational infrastructure needed to ensure a smooth transition from academic theory to real-time performance. In the end, we distilled our entire process into an index that reflects five core steps (depicted in the figure below):

  1. Identify Universe: Our universe generally consists of mid- to large-capitalization U.S. exchange-traded stocks.
  2. Remove Outliers: We conduct financial statement analysis with statistical models to avoid firms at risk for financial distress or financial statement manipulation.
  3. Screen for Value: We screen for stocks with low enterprise values relative to operating earnings.
  4. Screen for Quality: We rank the cheapest stocks on their long-term business fundamentals and current financial strength.
  5. Invest with Conviction: We seek to invest in a concentrated portfolio of the cheapest, highest quality value stocks. This form of investing requires disciplined commitment, as well as a willingness to deviate from standard benchmarks.

Step 1–Identify the Investable Universe: Mid- and Large-Caps

The first step in the QV investing process involves setting boundaries on the universe for further screening.  There are several reasons we place such limits around the stocks to consider.  A critical aspect involves liquidity, which is related to the size of the stocks under consideration.  In general, if we include stocks that are too small, the possibility of large price movements on small volume is a real risk. Ignoring liquidity leads to overstated backtests relative to actual returns.  In other words, if we include small stocks in our universe, the back-tested results may generate phenomenal returns, but these returns are likely unobtainable in the real world, even when operating with small amounts of capital.

In order to honestly assess and implement the QV approach, we eliminate all stocks below the 40th percentile breakpoint of the NYSE by market capitalization. As of December 31, 2013, the 40th percentile corresponded to a market capitalization of approximately $2 billion. Our universe also excludes ADRs, REITs, ETFs, financial firms, and others that present various data challenges incompatible with the QV approach.(3) Another requirement is that the firms we analyze have an adequate number of years of data to draw from, as some of the QV metrics require that we analyze financial data over the past eight years.

In summary, our investment universe contains liquid, non-financial companies with at least eight years of public operating history.

Step 2–Remove Outliers: Look for Red Flags

As noted value investor Seth Klarman has advised, “Loss avoidance must be the cornerstone of your investment philosophy.” This is an important concept, and underlies the first phase of our approach. As an initial criterion for making a successful investment, we seek to eliminate those firms that risk causing permanent loss of capital.  

Permanent loss of capital can come in many forms, but we bucket these risks into two basic categories: manipulation/fraud and financial distress (e.g., bankruptcy).

We leverage some tools that can help us identify “Red-Flag” firms:

  1. Accrual red flags
  2. Predictive models


Accrual Red Flags

Our first set of tools calculate measures related to accruals. Bernstein succinctly states the problem with accruals:(4)

CFO (cash flow from operations), as a measure of performance, is less subject to distortion than is the net income figure. This is so because the accrual system, which produces the income number, relies on accruals, deferrals, allocations and valuations, all of which involve higher degrees of subjectivity than what enters the determination of CFO. That is why analysts prefer to relate CFO to reported net income as a check on the quality of that income. Some analysts believe that the higher the ratio of CFO to net income, the higher the quality of that income. Put another way, a company with a high level of net income and a low cash flow may be using income recognition or expense accrual criteria that are suspect.

As Bernstein states, the problem with accruals is that they open the door for potential financial statement manipulation. A range of academic research has tested the hypothesis that investors fail to appreciate the importance of accrual measures and their impact on stock returns.(5)  We have leveraged this research to develop our own forensic accounting tools that use various accrual metrics to identify potential manipulation and subsequently eliminate these firms from our investment set. We look at extreme accruals and balance sheet bloat to capture red flags that might be associated with accruals.

We specifically target the following measures:

  • STA = (net income – cash flow from operations) / Total Assets (see Sloan 1996)
  • SNOA = (Operating Assets – Operating Liabilities) / Total Assets (see Hirshleifer et al. 2004)
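The two accrual red-flag ratios above can be sketched directly from their definitions. This is a minimal illustration; the input values are made up, and the variable names are illustrative stand-ins rather than any data vendor's field names:

```python
# Hedged sketch: compute the two accrual red-flag ratios defined above.
# All inputs are illustrative numbers, not real firm data.

def sta(net_income: float, cfo: float, total_assets: float) -> float:
    """Scaled Total Accruals (Sloan 1996): (net income - CFO) / total assets."""
    return (net_income - cfo) / total_assets

def snoa(operating_assets: float, operating_liabilities: float,
         total_assets: float) -> float:
    """Scaled Net Operating Assets (Hirshleifer et al. 2004)."""
    return (operating_assets - operating_liabilities) / total_assets

# A firm with high net income but weak cash flow shows a large positive STA:
print(sta(net_income=100.0, cfo=20.0, total_assets=400.0))    # 0.2
# A bloated balance sheet shows a high SNOA:
print(snoa(operating_assets=350.0, operating_liabilities=50.0,
           total_assets=400.0))                               # 0.75
```

High values of either ratio are the red flag: they indicate income built on accruals rather than cash.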

Predictive Models

Another set of tools we use involves statistical prediction techniques. Implementation of these models is highly technical, but the mechanism is intuitive. An example helps illuminate the process. Consider the case of financial statement manipulation: We hypothesize that high accruals, lots of leverage, rapidly changing financial statement ratios, and rapid sales growth might be related to manipulation. The problem is understanding how these variables are related. To build our solution, we need to take two steps: 1) identify a group of firms that manipulated their financial statements in the past, and 2) use statistical techniques to identify the relationship between the manipulators and the variables we think matter. Finally, we test our model on another sample of manipulator firms and examine whether the model has any “out-of-sample” prediction ability. If the model works, it will predict, with a success rate better than chance, whether a firm has manipulated financial statements. This process is essentially a simple version of “machine learning.” While this process sounds complicated, the procedure outlined is followed by academic researchers who have identified effective ways to pinpoint manipulation and financial distress.(6) We leverage these studies, and our own internal research, to develop prediction models that identify problematic firms and eliminate them.
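The two-step procedure can be sketched with a toy logistic regression. Everything here is an assumption for illustration: the data are synthetic, the two features (accruals and sales growth) are stand-ins, and the actual models referenced in the text are far richer:

```python
# Hedged sketch of the two-step process: fit a simple logistic model on
# firms labeled as past manipulators, then check its out-of-sample hit
# rate. Data and features are synthetic, for illustration only.
import math, random

def sigmoid(z):
    z = max(-30.0, min(30.0, z))          # clamp to avoid overflow
    return 1.0 / (1.0 + math.exp(-z))

def fit_logistic(X, y, lr=0.5, epochs=300):
    """Plain gradient-descent logistic regression (weights + bias)."""
    w, b = [0.0] * len(X[0]), 0.0
    for _ in range(epochs):
        for xi, yi in zip(X, y):
            p = sigmoid(sum(wj * xj for wj, xj in zip(w, xi)) + b)
            g = p - yi                    # gradient of log loss
            w = [wj - lr * g * xj for wj, xj in zip(w, xi)]
            b -= lr * g
    return w, b

def flag(w, b, xi):
    """True if the model predicts 'manipulator'."""
    return sigmoid(sum(wj * xj for wj, xj in zip(w, xi)) + b) > 0.5

random.seed(0)
# Synthetic firms: manipulators (1) tend to have higher accruals and
# faster sales growth than clean firms (0).
draw = lambda manip: ([random.gauss(0.2 if manip else 0.0, 0.05),
                       random.gauss(0.5 if manip else 0.1, 0.10)], manip)

# Step 1: fit on a labeled training sample.
train = [draw(1) for _ in range(50)] + [draw(0) for _ in range(50)]
w, b = fit_logistic([x for x, _ in train], [y for _, y in train])

# Step 2: test out-of-sample prediction ability on a fresh sample.
holdout = [draw(1) for _ in range(25)] + [draw(0) for _ in range(25)]
hits = sum(flag(w, b, x) == bool(y) for x, y in holdout)
print(f"out-of-sample hit rate: {hits / len(holdout):.0%}")  # well above the 50% chance rate
```

The point of the sketch is the workflow, not the model: fit on known manipulators, then demand better-than-chance accuracy on firms the model has never seen.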

The models we use are as follows:

Minimize Garbage

The final step is to remove all firms in the universe that fall in the bottom 5th percentile on any of the measures mentioned above. The graphic below depicts the high-level process:
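The "worst tail on any measure" filter can be sketched as follows. The orientation (here, higher scores are worse) and the toy scores are assumptions for illustration:

```python
# Hedged sketch: drop any firm whose percentile rank on any red-flag
# measure falls in the worst 5% tail. We assume higher scores are worse.

def percentile_ranks(values):
    """Rank each value as a fraction of the sample, in [0, 1]."""
    order = sorted(values)
    return [order.index(v) / (len(values) - 1) for v in values]

def survivors(firms, measures, cutoff=0.95):
    """Keep firms that are not in the worst tail on *any* measure."""
    flagged = set()
    for scores in measures:
        ranks = percentile_ranks(scores)
        flagged |= {f for f, r in zip(firms, ranks) if r > cutoff}
    return [f for f in firms if f not in flagged]

firms = [f"F{i}" for i in range(20)]
sta_scores  = [i / 20 for i in range(20)]          # F19 has the worst accruals
snoa_scores = [(19 - i) / 20 for i in range(20)]   # F0 has the worst balance-sheet bloat
kept = survivors(firms, [sta_scores, snoa_scores])
print(kept)  # F0 and F19 are gone; 18 firms survive
```

Note the union: a firm is removed if it is extreme on even one measure, which is exactly the "any of the measures" language above.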

Step 3–Value Screens: What Value Metric Performs the Best?

Steps 1 and 2 identify a universe that we can analyze. On average, we are left with a universe of roughly 800 large, liquid, publicly traded U.S. common stocks. Most importantly, these stocks do not reveal statistical “Red Flags” that imply questionable accounting or impending loss of capital.

In Step 3, we screen for the cheapest stocks. Ben Graham long ago recognized the importance of paying a low price for stocks. Graham’s “value anomaly,” or the significant outperformance of low price-to-fundamental stocks relative to high price-to-fundamentals, is now well-established in the academic and practitioner communities alike. However, practitioners continually tinker with this conclusion in order to create a better mousetrap. Typically, these ad-hoc adjustments include measures such as low price-to-earnings, low price-to-book value, dividends, etc. Not a week goes by when we aren’t solicited with a hot, new metric to test against our approach. We sought to provide a comprehensive answer to this debate. In the Journal of Portfolio Management, we published a peer-reviewed assessment of the best valuation metric(s) available. Simply stated, which measure of value works best for identifying stocks most likely to outperform?(7)

We reviewed historical stock market returns against a myriad of value strategies. More importantly, we directly tested them against one another in a quantitative horse race. The “horses” in our race were the following valuation metrics:

  • E/M – Earnings to Market Capitalization: The E/M ratio is simply a firm’s earnings divided by its total market capitalization.(8)
  • EBITDA/TEV – Enterprise Multiple: Employed extensively in private equity, this is simply a firm’s earnings before interest, taxes, depreciation and amortization (EBITDA) divided by its total enterprise value (TEV).(9)
  • FCF/TEV – Free Cash Flow Yield: The numerator for this metric is Free Cash Flow, which is net income + depreciation and amortization – working capital changes – capital expenditures. Once again, total enterprise value (TEV) is in the denominator.
  • GP/TEV – Gross Profits Yield: Revenue – cost of goods sold in the numerator (GP), and total enterprise value (TEV)in the denominator.
  • B/M – Book-to-Market: The book value of a firm divided by the firm’s market value (an academic favorite).
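The five "horses" can be computed side by side from a handful of financial-statement inputs. The firm below and the simplified enterprise-value formula are illustrative assumptions, not a vendor's data model:

```python
# Hedged sketch of the five valuation metrics above, computed from
# illustrative inputs. TEV here is simplified: equity + debt + preferred - cash.

def tev(mkt_cap, debt, preferred, cash):
    """Total enterprise value (simplified)."""
    return mkt_cap + debt + preferred - cash

firm = dict(mkt_cap=1000.0, debt=300.0, preferred=0.0, cash=100.0,
            earnings=80.0, ebitda=150.0, fcf=90.0, gross_profit=400.0,
            book_value=500.0)

ev = tev(firm["mkt_cap"], firm["debt"], firm["preferred"], firm["cash"])  # 1200.0

metrics = {
    "E/M":        firm["earnings"] / firm["mkt_cap"],      # 0.080
    "EBITDA/TEV": firm["ebitda"] / ev,                     # 0.125
    "FCF/TEV":    firm["fcf"] / ev,                        # 0.075
    "GP/TEV":     firm["gross_profit"] / ev,               # ~0.333
    "B/M":        firm["book_value"] / firm["mkt_cap"],    # 0.500
}
for name, value in metrics.items():
    print(f"{name:>10}: {value:.3f}")
```

All five are "yields" in the sense that a higher number means a cheaper stock, which is what makes a head-to-head horse race possible.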

Our conclusion? Enterprise multiples are arguably the most effective metric to capture the so-called value premium. But don’t take our word for it.

Loughran and Wellman (2009) make the following claim regarding enterprise multiples:

…the enterprise multiple is a strong determinant of stock returns

Walkshäusl and Lobe (2015) conducted an analysis of enterprise multiples in international markets and conclude the following:

return predictability is pronounced in developed and emerging markets…

Based on the evidence, EBITDA/TEV appears to be the best-performing price metric on both a raw and a risk-adjusted basis. Now, we aren’t necessarily wedded to EBITDA/TEV. In fact, all the valuation-based metrics beat the benchmark; however, we like enterprise multiples because they represent the valuation metric that a private company buyer would use to assess an investment opportunity. As Benjamin Graham, the intellectual founder of the value investment philosophy, states in his classic text, The Intelligent Investor, “Investment is most intelligent when it is most businesslike.”(10)

Moreover, we have conducted our own formal investigation into why enterprise multiples “work the best,” at least historically.

To ascertain whether the EBIT/TEV value factor is attributable to risk or mispricing we set up the following experiment:(11)

  • Break the cheapest and most expensive EBIT/TEV portfolios into different buckets based on their predicted mispricing (using a variety of measures).
  • Create two portfolios:
    • High Predicted Mispricing: long the cheap high-mispricing and the expensive high-mispricing buckets
    • Low Predicted Mispricing: long the cheap low-mispricing and the expensive low-mispricing buckets

If EBIT/TEV is a risk-based measure, the difference in the performance of the predicted “high mispricing” and “low mispricing” portfolios should be insignificant because mispricing doesn’t drive performance, risk does. However, if there is a difference in these two portfolios, the results suggest that mispricing arguably drives the premium.
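The double sort behind the experiment can be sketched as follows. The universe, the mispricing score, and the quintile/median cutoffs are all illustrative assumptions, not the published methodology:

```python
# Hedged sketch of the double-sort experiment: sort on EBIT/TEV, take the
# cheap and expensive tails, then split each tail by a predicted-mispricing
# score. Data are illustrative.

def split_by_median(stocks, key):
    """Split a list into (low, high) halves by the given key.
    With an odd count, the extra name lands in the high half."""
    ranked = sorted(stocks, key=key)
    mid = len(ranked) // 2
    return ranked[:mid], ranked[mid:]

# Illustrative universe: (name, EBIT/TEV yield, predicted mispricing score).
# A high EBIT/TEV yield means the stock is cheap.
universe = [(f"S{i}", 0.02 + 0.01 * i, (i * 7) % 10) for i in range(20)]

by_value = sorted(universe, key=lambda s: s[1])
expensive, cheap = by_value[:5], by_value[-5:]

cheap_lo, cheap_hi = split_by_median(cheap,     key=lambda s: s[2])
exp_lo,   exp_hi   = split_by_median(expensive, key=lambda s: s[2])

# Compare the performance of these two portfolios: a significant gap
# points to mispricing; no gap points to a risk-based story.
high_mispricing = cheap_hi + exp_hi
low_mispricing  = cheap_lo + exp_lo
print(len(high_mispricing), len(low_mispricing))
```

Each portfolio holds both cheap and expensive names on purpose: that neutralizes the value tilt itself, so any performance gap between the two must come from the mispricing dimension.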

Our research suggests that the Enterprise Multiple (EM) effect can be attributed to mispricing rather than to higher systematic risk, although we will not deny that higher risk likely plays some role in the higher expected returns. Here is a figure highlighting the core conclusion when it comes to the enterprise multiple effect:(12)

The results are hypothetical results and are NOT an indicator of future results and do NOT represent returns that any investor actually attained. Indexes are unmanaged, do not reflect management or trading fees, and one cannot invest directly in an index. Additional information regarding the construction of these results is available upon request.

To summarize, the enterprise multiple metric seems to capture a higher degree of systematic mispricing than its close cousins: book-to-market, earnings-to-market, and so forth.

In our index methodology, we use a variation on the enterprise multiple (EBIT/TEV) as part of our valuation screening technology, and screen our universe from Step 1 and Step 2 down to the ten percent cheapest stocks based on EBIT/TEV. This screen ensures we are dealing with a subset of firms sitting in the “bargain bin” of our universe.
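The cheapest-decile screen itself is a one-liner once the metric is in hand. The universe below is illustrative:

```python
# Hedged sketch: keep the cheapest 10% of the universe by EBIT/TEV
# (i.e., the highest earnings yield on enterprise value). Numbers are
# illustrative.

def cheapest_decile(universe):
    """universe: list of (ticker, ebit, tev). Returns the cheapest 10%."""
    ranked = sorted(universe, key=lambda t: t[1] / t[2], reverse=True)
    return ranked[:max(1, len(universe) // 10)]

# 40 firms with identical EBIT but rising enterprise values, so the
# lowest-TEV names are the cheapest.
universe = [(f"S{i}", 100.0, 800.0 + 100.0 * i) for i in range(40)]
picks = cheapest_decile(universe)
print([t[0] for t in picks])  # ['S0', 'S1', 'S2', 'S3']
```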

Step 4–Quality Screens: Quality Differentiates Cheap Stocks

After “cleaning” our liquid universe (Step 2) and zeroing in on the “bargain bin” of the cheapest stocks (Step 3), we move onto Step 4 of our investment process. Step 4 addresses a simple concern: How do we separate cheap stocks that may be cheap for a good reason (junk) from cheap stocks that are fundamentally mispriced (good value)?

Academic research highlights that fundamental analysis (often referred to as “quality” metric analysis) can help differentiate the winners from the losers when sifting through the cheap-stock bargain bin. For example, Piotroski and So (2012) make the following statement:

…a simple financial statement analysis-based approach can identify mispricing embedded in the prices of value firms.

Here is an annotated figure from their research that highlights their key point:

The results are hypothetical results and are NOT an indicator of future results and do NOT represent returns that any investor actually attained. Indexes are unmanaged, do not reflect management or trading fees, and one cannot invest directly in an index. Additional information regarding the construction of these results is available upon request.

The black bars reflect a portfolio that captures the generic value premium: long cheap stuff; short expensive stuff. The solid black line represents the portfolio that is long cheap quality and short expensive junk; the dotted line is long cheap junk and short expensive quality. Under the risk-based explanation for the value premium, all three strategies should perform roughly the same. However, the evidence suggests that fundamental analysis, or “quality” metrics, can help a value investor improve their results. With this knowledge, we add two quality screens to our systematic value process:

1) Long-Term Business Strength
2) Current Financial Strength

Long-Term Business Strength

In thinking about Long-Term Business Strength, or “economic moat,” we turn to the Sage of Omaha for guidance. Warren Buffett looks for businesses with enduring competitive advantage and sustainable earnings power (above and beyond their competitors). What does that competitive advantage look like? A firm might manufacture goods at a lower cost, provide a product for which there are no direct substitutes, or represent a trusted brand that keeps customers coming back. These types of advantages, and others like them, are the collective “moat” that allow companies to raise the drawbridge and defend market share from the competition.

As quantitative investors, we are not focused on understanding the details of any particular moat. Instead, we want to objectively identify which metrics are appropriate for assessing an economic moat’s strength. One key feature of economic moats is that they enhance the profitability of investments, which allows the firm to generate above-average returns on invested capital. Any business with a wide moat, therefore, requires lower rates of reinvestment to maintain or grow existing production capacity, leaving additional capital that can be distributed to owners without affecting the company’s future growth. Thus, investment profitability can be used to identify companies with economic moats.

In assessing an economic moat, we are particularly interested in high returns that are sustained over a full business cycle. To do so, we use eight years for our long-term average calculation, as this captures a typical boom-bust business cycle. We use three metrics that help us identify statistical evidence for an economic moat: Long-term free cash flow generation; long-term returns on capital; and long-term margin characteristics.
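The eight-year averaging can be sketched as follows; the metric series below are illustrative assumptions, not real firm data:

```python
# Hedged sketch: average each moat metric over an eight-year window, per
# the long-term business strength discussion above. Series are illustrative.

def long_term_average(series, years=8):
    """Average the most recent `years` observations of an annual series."""
    window = series[-years:]
    return sum(window) / len(window)

# Nine years of illustrative annual data (oldest first):
roic   = [0.18, 0.16, 0.20, 0.17, 0.19, 0.21, 0.18, 0.20, 0.22]
margin = [0.35, 0.34, 0.36, 0.33, 0.35, 0.37, 0.36, 0.38, 0.37]

print(f"8-year ROIC:   {long_term_average(roic):.3f}")
print(f"8-year margin: {long_term_average(margin):.3f}")
```

Averaging over a full boom-bust window is the point: a firm that only looks profitable at the top of the cycle will not sustain a high eight-year average.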

Granted, an economic moat is a valuable quality signal, but it represents only one leg of our fundamental analysis. We must also be certain that the cheap stocks under consideration have some level of current financial strength.

Current Financial Strength

We introduce the notion of financial strength with an analogy. Suppose you had to sail across the Atlantic and were given a choice between making the crossing in either an eight foot sailing dinghy, or a 60 foot yacht.  Which would you choose?  Obviously, you would want the safety and security afforded by the larger, more seaworthy yacht.  The same concept holds when deciding upon the stocks to include in your portfolio: all things being equal, an investor should seek out those stocks that are less vulnerable to downturns or other macroeconomic shocks.

We know intuitively why a durable 60-foot yacht protects sailors better than a fragile dinghy: its heavy keel keeps it stable, it won’t roll violently in heavy winds, and it can take a pounding by waves.  What are the financial characteristics that enable a firm to protect capital during a stormy business climate or from unanticipated developments?  Several years ago, Joseph Piotroski, a specialist in accounting-based fundamental analysis, and currently a professor at Stanford, did some interesting analysis relating to this subject. He used a nine-point scale, utilizing common accounting ratios and measurements, to evaluate the financial strength of companies and eliminated those most at risk of financial distress.  This scale, which he called the “F_SCORE,” involved financial statement metrics across several areas: profitability, leverage, liquidity and source of funds, and operating efficiency.  The results were nothing short of astonishing: Piotroski found that a value investment strategy that bought expected winners and shorted expected losers generated a 23 percent annual return between 1976 and 1996—a record of which even Buffett would be proud. (13)

As Sir Isaac Newton noted, “If I have seen further, it is by standing on the shoulders of giants.” We also believe in standing on the shoulders of giants whenever possible since, as Newton observed, you can see so much farther. We therefore use Piotroski’s F-SCORE as a basis for our approach to measuring current financial strength, but with some improvements. Here is a simple outline of our current financial strength 10-point checklist:

  1. Current profitability (3 items)
  2. Stability (3 items)
  3. Recent operational improvements (4 items)

The current financial strength score reduces the overall financial health of a firm to a single number between 0 and 10, which can be used as a basis for comparing a firm’s overall financial strength versus that for other firms.
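A checklist of this kind reduces naturally to a sum of pass/fail checks. The ten items below are illustrative stand-ins in the spirit of Piotroski's F_SCORE, grouped to match the profitability/stability/improvements outline above; they are not Alpha Architect's actual ten items:

```python
# Hedged sketch of a 10-point financial-strength checklist. The specific
# checks are illustrative assumptions, not the index's actual items.

def financial_strength(cur, prev):
    """cur/prev: dicts of simple fundamentals for this year and last year.
    Returns an integer score between 0 and 10 (higher = stronger)."""
    checks = [
        # Current profitability (3 items)
        cur["roa"] > 0,
        cur["cfo"] > 0,
        cur["cfo"] > cur["net_income"],          # cash backs up the earnings
        # Stability (3 items)
        cur["leverage"] <= prev["leverage"],
        cur["current_ratio"] >= prev["current_ratio"],
        cur["shares_out"] <= prev["shares_out"],  # no dilution
        # Recent operational improvements (4 items)
        cur["roa"] > prev["roa"],
        cur["gross_margin"] > prev["gross_margin"],
        cur["asset_turnover"] > prev["asset_turnover"],
        cur["accruals"] < prev["accruals"],
    ]
    return sum(checks)

cur  = dict(roa=0.08, cfo=120.0, net_income=100.0, leverage=0.30,
            current_ratio=1.8, shares_out=50.0, gross_margin=0.40,
            asset_turnover=1.1, accruals=0.02)
prev = dict(roa=0.06, cfo=90.0,  net_income=85.0,  leverage=0.35,
            current_ratio=1.6, shares_out=50.0, gross_margin=0.38,
            asset_turnover=1.0, accruals=0.05)

print(financial_strength(cur, prev))  # 10 -- every check passes for this firm
```

Because each item is binary, the composite is directly comparable across firms, which is exactly how the score is used in the ranking step.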

Integrating Price with Quality

For both aspects of quality–Long-Term Business Strength and Current Financial Strength–we tabulate thousands of data points based on the principles discussed above and derive quality scores for all firms in our cheap universe identified in Step 3.

Here is a breakdown of the metrics that go into our quality assessment and how they are weighted:


We sort our cheap universe on our composite quality score to identify a universe of what we believe are the cheapest, highest-quality value firms.

Step 5–Invest with Conviction: Focused Value Factor Exposure

Steps 1 through 4 systematically seek to identify the cheapest, highest-quality value stocks. We believe that this portfolio of stocks has the highest probability of capturing the value premium over the long term.

But one question remains: How do we construct our final QV portfolio?

Charlie Munger, at the 2004 Berkshire Hathaway Annual Meeting, is quoted as saying, “The idea of excessive diversification is madness…almost all good investments will involve relatively low diversification.” Another word for Munger’s issue with diversification for a skilled manager is “diworsification.” Elton and Gruber, professors with multiple papers and books on the subject of diversification, highlight that the benefits to holding a bigger portfolio of securities decline rapidly after a portfolio grows beyond 50 securities.(14)

So while we are protected by diversification, we don’t want too much. Jack has a nice post that talks directly to the Elton and Gruber findings in the context of value investing. Moreover, Charlie Munger is correct: to the extent you believe you have a reliable method of constructing a high alpha “active” portfolio, less diversification is desirable.

In the spirit of less diversification (aka “high conviction”), we construct our index to have around 40-50 securities. Consider a hypothetical illustration of our screening process, which roughly reflects our experience managing our index in the real-world:

  1. Identify Investable Universe: We typically generate 900 names in this step of the process.
  2. Forensic Accounting Screens: We usually eliminate 100 names, bringing the total to 800 stocks.
  3. Valuation Screens: Here we screen on the cheapest 10% of the universe, or 80 stocks.
  4. Quality Screens: We calculate a composite quality score and eliminate the bottom half, leaving 40 stocks.
  5. Invest with Conviction: We invest in our basket of 40 stocks that are the cheapest, highest quality value stocks.
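The five-step funnel above can be expressed as a small sketch, using the illustrative counts from the text:

```python
# Hedged sketch of the five-step screening funnel with the illustrative
# counts above (900 -> 800 -> 80 -> 40 -> 40).

def funnel(universe_size=900):
    steps, n = [], universe_size
    steps.append(("Identify Investable Universe", n))
    n -= 100                   # forensic accounting screens remove ~100 names
    steps.append(("Forensic Accounting Screens", n))
    n //= 10                   # keep the cheapest 10% by EBIT/TEV
    steps.append(("Valuation Screens", n))
    n //= 2                    # keep the top half by composite quality
    steps.append(("Quality Screens", n))
    steps.append(("Invest with Conviction", n))
    return steps

for name, count in funnel():
    print(f"{name}: {count}")
```

The sketch makes the concentration explicit: two ratio cuts (cheapest decile, top quality half) do almost all the narrowing, taking 800 names down to 40.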

Our index has the following construction details:

  • Equal-weight
  • Quarterly rebalanced (international is semi-annually rebalanced)
  • 25% sector/industry constraint
  • No financials
  • Pre-trade liquidity requirements

We don’t like to emphasize historical performance, because we believe the process is paramount. However, if you’d like to see the hypothetical performance we suggest you review our index educational materials, which are available here.

Why Isn’t Everyone a Concentrated Systematic Value Investor?

We feel we have identified a reasonable systematic value investing approach that will capture a large value premium over time. But while all of this may sound promising, one must consider a simple question:

If this is so easy, why isn’t everyone doing it?

The easy answer is that most investors aren’t insane. Value investing works because it is risky and painful. There is no way around this basic fact. Investors who follow our index must buy stocks that probably make them uneasy, and almost all of our portfolio holdings have business problems that are lamented by the Wall Street Journal and CNBC day in and day out. Some of these problems will actually play out in the future and the index will lose money on these positions. However, on average, we believe these lamentations will never be as bad as initially advertised and the index will benefit, in the aggregate, when expectations revert to normal.

Nevertheless, the road will be bumpy, full of volatility, and is not for everyone.

Consider the experience of a systematic value investor who simply buys low-priced stocks. Our approach, while not exactly the same as a simple low-price value strategy, shares many of the same characteristics—both good and bad—so this thought experiment serves as a nice case study to contextualize the costs and benefits of contrarian investment programs.

Using data on portfolios sorted by book-to-market ratios, we examine time periods where it was painful to be a value investor.

One such period is during the run-up to the internet bubble. We examine the gross total returns (including dividends and cash distributions) from 1/1/1994-12/31/1999 for a Value portfolio (High book-to-market decile, market-weighted returns, FF_VAL), and a Growth portfolio (Low book-to-market decile, market-weighted returns, FF_GROWTH), the S&P 500 total return index (SP500), and the 10-Year Treasury Total Return index (10-Year).(15)

The figure below highlights the extreme underperformance of the simple value portfolio relative to a simple growth portfolio and the broader market. From 1994 to 1999, value underperformed growth by almost 7 percentage points a year. Now that’s pain! When one compounds that spread over 6 years, it translates into a serious gap in cumulative performance.
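A quick arithmetic check makes the cumulative gap concrete, compounding the CAGRs reported just below (19.68% for value, 26.51% for growth) over the 6-year stretch:

```python
# Compound the 1994-1999 CAGRs from the summary table: 19.68% for value
# vs 26.51% for growth, over 6 years, starting from $1.
val_cagr, gro_cagr, years = 0.1968, 0.2651, 6

val_growth = (1 + val_cagr) ** years
gro_growth = (1 + gro_cagr) ** years

print(f"$1 in value:  ${val_growth:.2f}")   # roughly $2.94
print(f"$1 in growth: ${gro_growth:.2f}")   # roughly $4.10
```

A value investor's dollar grew to about $2.94 while the growth investor's grew to about $4.10: a gap of well over a third of the ending wealth, from a "mere" 7-point annual spread.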

Summary Statistics FF_VAL FF_GROWTH SP500
CAGR 19.68% 26.51% 23.83%
Standard Deviation 14.60% 16.17% 13.63%
Downside Deviation (MAR = 5%) 12.52% 11.02% 10.50%
Sharpe Ratio (RF=T-Bills) 0.98 1.25 1.30

The results are hypothetical results and are NOT an indicator of future results and do NOT represent returns that any investor actually attained. Indexes are unmanaged, do not reflect management or trading fees, and one cannot invest directly in an index. Additional information regarding the construction of these results is available upon request.

The figure below makes the point even more clear. The value strategy underperforms the broad market for 5 out of 6 years.

Year FF_VAL FF_GROWTH SP500 10-Year
1994 -4.88% 1.62% 1.35% -4.02%
1995 40.77% 36.32% 37.64% 22.97%
1996 19.54% 20.84% 23.23% 2.08%
1997 31.09% 31.74% 33.60% 10.29%
1998 28.29% 41.76% 29.32% 11.55%
1999 9.12% 31.05% 21.35% -4.20%

The results are hypothetical results and are NOT an indicator of future results and do NOT represent returns that any investor actually attained. Indexes are unmanaged, do not reflect management or trading fees, and one cannot invest directly in an index. Additional information regarding the construction of these results is available upon request.

Would you retain your financial advisor if they underperformed for the better part of 6 years? Most would not. Even the most disciplined and hardened value investor would have a hard time staying committed to a philosophy that lost to the market for almost 6 years in a row. Warren Buffett, arguably the greatest investor of all time, was criticized in the media for “losing his magic touch” at the tail end of the late-’90s bull market.

Of course, looking back, we now realize that in 1999 the internet bubble was about to burst. Value investors got the last laugh over the next several years. From 2000 to 2006, value stocks earned 13.00 percent a year, while growth stocks lost 4.57 percent a year. Here are the annual returns:

Year FF_VAL FF_GROWTH SP500 10-Year
2000 6.67% -21.03% -8.34% 3.14%
2001 10.98% -18.69% -11.88% -10.80%
2002 -17.75% -24.24% -21.78% -5.14%
2003 59.58% 22.64% 28.72% 26.68%
2004 16.66% 6.38% 10.98% 15.70%
2005 7.39% 3.92% 5.23% 10.24%
2006 20.79% 9.27% 15.69% 14.82%

The results are hypothetical results and are NOT an indicator of future results and do NOT represent returns that any investor actually attained. Indexes are unmanaged, do not reflect management or trading fees, and one cannot invest directly in an index. Additional information regarding the construction of these results is available upon request.

Over the full cycle from 1994 to 2006, value came through: value earned 16.03 percent a year, while the market earned 8.69 percent a year. An investor compounding at a spread of more than 7 percentage points a year over those 13 years generates a substantially different wealth profile over time. The figure below shows the performance of the simple low-price value strategy relative to the market from 1994 to 2006:

The results are hypothetical results and are NOT an indicator of future results and do NOT represent returns that any investor actually attained. Indexes are unmanaged, do not reflect management or trading fees, and one cannot invest directly in an index. Additional information regarding the construction of these results is available upon request.

Since 2006, value has again entered the pain trade, this time for a 10-year stretch (the figures below cover 2007 to 2016).

Summary Statistics* FF_VAL FF_GROWTH SP500
CAGR 3.30% 8.98% 7.09%
Standard Deviation 28.50% 15.80% 15.22%
Downside Deviation (MAR = 5%) 20.36% 12.01% 11.77%
Sharpe Ratio (RF=T-Bills) 0.23 0.58 0.48
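For reference, here is a sketch of how summary statistics like these are conventionally computed from a monthly return series. The return series, risk-free rate, and MAR below are made up purely for illustration and are not the table's inputs:

```python
import statistics

# Hypothetical monthly returns -- for illustration only.
monthly = [0.02, -0.01, 0.03, -0.04, 0.01, 0.02,
           -0.02, 0.05, 0.00, 0.01, -0.03, 0.02]
rf_annual = 0.02    # assumed T-bill rate
mar_annual = 0.05   # minimum acceptable return (MAR)

# CAGR: compound the monthly returns, then annualize.
years = len(monthly) / 12
growth = 1.0
for r in monthly:
    growth *= 1 + r
cagr = growth ** (1 / years) - 1

# Annualized standard deviation of monthly returns.
ann_vol = statistics.stdev(monthly) * 12 ** 0.5

# Downside deviation: only count shortfalls below the monthly MAR.
mar_monthly = (1 + mar_annual) ** (1 / 12) - 1
downside = [min(0.0, r - mar_monthly) for r in monthly]
downside_dev = (sum(d * d for d in downside) / len(monthly)) ** 0.5 * 12 ** 0.5

# Sharpe ratio using the annualized figures.
sharpe = (cagr - rf_annual) / ann_vol
print(f"CAGR {cagr:.2%}, vol {ann_vol:.2%}, "
      f"downside dev {downside_dev:.2%}, Sharpe {sharpe:.2f}")
```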


Will value ever come back? Who really knows. But history suggests that the value premium is often captured by those with the ability to take on the most pain.

Conclusions Regarding the Quantitative Value Process

In the short-run, most of us simply cannot endure the pain that value investing strategies impose on our portfolios and our minds. For those in the investment advisory business, providing a strategy with the potential for multi-year underperformance is akin to career suicide. And yet, at Alpha Architect, we explicitly focus on building our Quantitative Value Indexes based on our systematic value investing philosophy. Clearly, these indexes are not for everyone. However, our hope is that we can educate investors with the appropriate temperament on what it takes to achieve long-term investment success as a value-investor. The single most important factor is sticking to a value investment philosophy through thick and thin. Our systematic value investment process facilitates our ability as investors to simply “follow the model” and avoid behavioral biases that can poison even the most professional and independent fundamental value investors.

We believe value investing works over the long-haul. Benjamin Graham distilled the secret of sound value investment into three words: “margin of safety.” We’ve focused on the behavioral aspects that drive value investing and taken Graham’s original motto a bit further. Our enhanced process can be distilled into the following:

We seek to buy the cheapest, highest quality value stocks.

— Wesley R. Gray and Jack R. Vogel, co-CIOs Alpha Architect

Information on our Quantitative Value Indexes is available here.

Here are some specific research/educational materials:

The Quantitative Value book, co-written with Toby Carlisle, outlines the details associated with steps 2, 3, and 4 if you’d like to learn more about the process.

  • The views and opinions expressed herein are those of the author and do not necessarily reflect the views of Alpha Architect, its affiliates or its employees. Our full disclosures are available here. Definitions of common statistics used in our analysis are available here (towards the bottom).
  • This site provides NO information on our value ETFs or our momentum ETFs. Please refer to this site.


References

1. Grove, W., Zald, D., Lebow, B., and B. Nelson, 2000, “Clinical Versus Mechanical Prediction: A Meta-Analysis,” Psychological Assessment 12, p. 19-30.

2. The recommendations are directed towards the quality of the book and are not an endorsement of advisory services provided by Alpha Architect, LLC or affiliates. Alpha Architect does not know if the recommenders approve or disapprove of its services. The recommendations were chosen from a list of formal recommendations based on whether the author had a PhD or not.

3. The elimination of financial firms is due to Step 2 of the Quantitative Value process, mainly because of the leverage of financial firms.
4. Bernstein, L. 1993. Financial Statement Analysis. 5th ed. Homewood, IL: Irwin.
5. Examples include Sloan, 1996, “Do Stock Prices Fully Reflect Information in Accruals and Cash Flows about Future Earnings?” Accounting Review 71, p. 289-315 and Hirshleifer, Hou, Teoh, and Zhang, 2004, “Do Investors Overvalue Firms with Bloated Balance Sheets?” Journal of Accounting and Economics 38, p. 297-331.
6. Beneish, M. D, 1999, The detection of earnings manipulation, Financial Analysts Journal, 55(5), 24-36 and Campbell, Hilscher, Szilagyi, 2011, Predicting Financial Distress and the Performance of Distressed Stocks, Journal of Investment Management 9, p. 14-34.
7. Jack Vogel and I have a formal paper on this subject, “Analyzing Valuation Measures: A Performance Horse Race over the Past 40 Years,” published in The Journal of Portfolio Management 39, p. 112-121.
8. Note the E/M is the inverse of the more commonly referenced P/E ratio.
9. Total Enterprise Value (TEV) can be thought of as the price an outside buyer would need to pay to buy the entire firm — the buyer would need to buy all the equity and the debt, but would receive back any cash the company has on hand. Formally, we measure TEV as follows: TEV = Market Capitalization + Short-term Debt + Long-term Debt + Preferred Stock Value – Cash and Short-term Investments.
10. Graham, B. 1993. The Intelligent Investor. 4th Revised Edition. New York, NY: Harper & Row Publishers.
11. Here is a summary of the paper.
12. The image below is a visualization of the results from Table 2 of a working research paper by Crawford, Gray, Vogel, and Xu (accessed 6/1/17)
13. Piotroski, J., 2000, “Value Investing: The Use of Historical Financial Statement Information to Separate Winners from Losers,” Journal of Accounting Research 38, p. 1-41.
14. Elton, E. and Martin Gruber, 1977, “Risk Reduction and Portfolio Size: An Analytical Solution,” The Journal of Business 50, p. 415-437.
15. Bloomberg and the Ken French data website.

About the Author:

After serving as a Captain in the United States Marine Corps, Dr. Gray earned a PhD and worked as a finance professor at Drexel University. Dr. Gray’s interest in bridging the research gap between academia and industry led him to found Alpha Architect, an asset management firm that delivers affordable active exposures for tax-sensitive investors. Dr. Gray has published four books and a number of academic articles. Wes is a regular contributor to multiple industry outlets, including the Wall Street Journal, Forbes, and the CFA Institute. Dr. Gray earned an MBA and a PhD in finance from the University of Chicago and graduated magna cum laude with a BS from The Wharton School of the University of Pennsylvania.


  1. janvrots October 9, 2014 at 12:48 pm

    How do your returns contrast with this tweak to finding good quality stocks in the bargain bin?
    1) Rank the universe using the quality score,
    2) Identify the top 40 stocks using the quality score,
    3) Select the cheapest 20 stocks from this 40-stock basket.

    Love your website and your approach to the market

    • Wesley Gray, PhD October 9, 2014 at 3:25 pm

      A lot worse. Quality is a marginal stand-alone factor. Price is everything. If you aren’t fishing in the bargain bin, you aren’t fishing in the bass pond with the trophies.

      • janvrots October 9, 2014 at 7:53 pm

        Thanks. Can you share some numbers contrasting the two strategies?

        • Jack Vogel, PhD October 9, 2014 at 9:55 pm

          We ran the numbers, and although adding a “price” screen after ranking on quality added value in the past, we found that ranking on price and then quality worked better. Glad you like the website!
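A toy sketch of the two screening orders being discussed in this thread. The rank fields and cutoffs are hypothetical illustrations, not the actual QV parameters:

```python
# Two-stage screens over a stock universe. Each stock is a dict with
# hypothetical 'value_rank' and 'quality_rank' fields (1 = best).

def value_then_quality(stocks, n_stage1=80, n_final=40):
    # QV-style ordering: cheapest names first, then keep the highest quality.
    cheap = sorted(stocks, key=lambda s: s["value_rank"])[:n_stage1]
    return sorted(cheap, key=lambda s: s["quality_rank"])[:n_final]

def quality_then_value(stocks, n_stage1=40, n_final=20):
    # The commenter's tweak: highest quality first, then the cheapest.
    good = sorted(stocks, key=lambda s: s["quality_rank"])[:n_stage1]
    return sorted(good, key=lambda s: s["value_rank"])[:n_final]
```

Both functions return a portfolio, but they select from different pools: the first never leaves the cheap decile, while the second can end up paying up for quality.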

          • janvrots October 10, 2014 at 4:47 am

            As quants we dream of models that don’t have periods of underperformance. I am sure there are periods where the “quality first” model outperforms the traditional QV model. I wonder if there is any information in this. Possibly a signal that could help determine bet size. I am under the impression that you guys have the data, click a button, and 10 seconds later the results are in!!

          • Wesley Gray, PhD October 10, 2014 at 8:24 am

            Definitely. No system or strategy works in every period and across all time. Those systems are called Bernie Madoff systems. 🙂

            We try and avoid pitching our specific strategies on the blog. Especially as it relates to backtested data that uses the actual algorithms we deploy in our business. We’d love to share more, but compliance issues prevent us from being as open as we’d like to be in a public forum. I’m sure you understand.

          • Menno Dreischor January 27, 2015 at 12:18 pm

            Dear Dr. Gray, I find the difference between 10.88% for value investing and 9.45% for the overall market to be rather small, especially considering the volatility in the returns. Assuming yearly returns to be normally distributed (a bit of a stretch I know, although the distribution between 1926 and 2008 is pretty close to normal) and volatility to be equal for both, the difference using a standard t-test is simply not statistically significant. So, although your argument is pretty convincing the numbers appear to show a different picture, namely that there is no evidence that value investing (the simple form you use as an example) actually works, at least across this time frame.

          • Wesley Gray, PhD January 28, 2015 at 7:29 pm

            Hi Menno,

            The example above is a specific set of years using a generic value metric and is meant to highlight that value can underperform over a 5+ year period of time, but over a full cycle it tends to beat the market. When we examine a data-mined period of terrible performance, the performance during this period is not going to be a “barn burner.” If we look over much longer data sets and/or examine higher conviction valuation metrics the spread would be larger. We, as well as many others, have done this sort of analysis in other research that has been published in books and journals.

            All that said, markets are extremely efficient, at the margin, so a poorly executed value strategy with high fees and tax-drag can make a positive 1.4% spread turn into a statistically significant loss! And there is certainly an argument to be made that value is a ‘fake anomaly’; we don’t subscribe to that school of thought based on our own research and belief in behavioral finance, but the “value is data-mining” hypothesis is certainly a potential alternative hypothesis.

            Good luck.

          • Menno Dreischor January 29, 2015 at 1:59 am

            Dear Wesley,

            Thank you for your swift and extensive reply. Very interesting! We are also modelling the market, and although we do things differently, personally I also subscribe to your school of thought.

            Very best regards,

  2. Fabian October 31, 2014 at 3:27 am


    I read your book twice and liked it a lot!
    … and probably still missed a lot of explanations, so I have some questions.

    What is not yet clear to me is whether the quantitative value approach is suitable for small and micro caps. It is clear to me that there is more trouble there, e.g., with the bid-ask spread.

    As I understand it, the magic formula invests in 30 stocks each month, with overlapping periods and a holding period of 1 year. Did you test the overlapping approach and the larger number of stocks for Quantitative Value as well? I assume that this could reduce the MAXDD!?

    Thanks in advance
    Best Regards

    • Wesley Gray, PhD October 31, 2014 at 7:58 am

      Hi Fabian,

      Thanks for the question.

      The QV approach is applicable across market caps and across equity markets. Of course, capacity is more limited when one plays in the small/micro world.

      To answer your question--yes, we have tested this...in fact, we have tested just about anything and everything related to value investing at this point--including the various perturbations of portfolio construction/holdings/etc.

      In general, we avoid large portfolios, for diworsification reasons and because they increase costs.

      When it comes to reducing drawdowns, one needs to overlay some sort of risk management system on top of the long-only stock selection bucket. The sad reality of ANY long-only equity system is that you can lose a large amount of capital. Period. I think the lowest drawdown I’ve ever seen on a long-only equity strategy is around 35% (need to have at least 30yrs of history).

      • Fabian October 31, 2014 at 6:24 pm

        Hello Wesley,

        thanks for the quick reply and the further information, which I will read in the next days.

        As a risk management system, do you mean something like adding momentum to the Quantitative Value approach (similar to the trend-value strategy by O’Shaughnessy, or investing only in times when the dual-momentum approach by Antonacci is active for equities)?
        Or maybe something like combining 60% Quantitative Value with 30% dual momentum and 10% cash?

        • Wesley Gray, PhD November 1, 2014 at 10:45 am

          Sure, so you could do long-term trend following based off of the S&P 500, or some sort of time-series momentum rule like Antonacci suggests...the main thing is to keep it simple and make sure you can maintain discipline with whatever risk mgmt concept you decide on.

          • Fabian November 1, 2014 at 3:46 pm

            Thanks. I just read your interview on abnormal returns, which has some momentum information as well. Very interesting! I will look closer into the QVAL etf description. Especially the tax advantages sound interesting, but I am not sure yet if this accounts for Germany as well.

          • Wesley Gray, PhD November 1, 2014 at 4:51 pm

            We should probably take any tax conversations offline. Feel free to email me. We can discuss in a private forum…

      • Dan February 19, 2015 at 10:41 pm

        Hi Wesley,

        Huge fan of the book and I’ve been thoroughly impressed with all of the extra information on this website. I also have a question regarding investing in smaller cap companies. You mention above that the strategy is viable across all market caps. If we expand our universe of stocks to include some mid/small cap companies then the top 10% of stocks value wise also becomes larger. Let’s say hypothetically our top decile now contains 160 stocks instead of the 80 stocks like in the example above. Now, investing in the top half (in terms of quality) of these stocks results in a portfolio of 80 stocks. For a retail investor this is less than ideal and as you mentioned in other posts it would be better to keep our portfolio between 30-40 stocks. How would you go about choosing which stocks to invest in? My instinct would be to use the top 5% of stocks value wise instead of the 10%. This would result in 80 stocks that could be further sorted into high vs low quality. The other option is of course just to invest in the top 40 stocks within the original top decile of 160 stocks. Any thoughts would be greatly appreciated!

        • Wesley Gray, PhD February 20, 2015 at 5:12 pm

          Hey Dan,

          It all comes down to the costs/benefits, as you mentioned.

          In general, buying the cheapest stuff in the market has been a good risk-adjusted bet. Which implies that loosening the constraints on your available universe might be a good idea (more opportunities to find really cheap stuff).


          As you include more illiquid securities, you start dealing with serious costs. We believe the “size premium” is interesting, but limited.

          You also need to consider your broader portfolio. If your entire equity portfolio is 40 stocks, it might be a good portfolio in expectation, but you’ll likely have a hard time sticking to the strategy when it inevitably endures heart-wrenching volatility and tracking-error relative to the benchmark. However, if this 40 stock portfolio is coupled with a few other systems and your overall equity allocation has some broader diversification, then concentrating in 40 names might make sense.

          Tough call to make. Best of luck!

          • Dan February 21, 2015 at 6:43 pm

            Thanks for the quick response and for the additional info!



  3. dph May 4, 2015 at 3:00 am

    Wes, how much variance in returns do you see when you run these screens at different 10 year periods in time? Is this a process that can perform well even when starting from a high CAPE?

    Is there any screenable strategy that is relatively immune to the macro stock market valuation metrics?

    • dph May 4, 2015 at 4:31 pm

      Does the quantitative value philosophy work well in international markets? Are those harder to backtest because of data quality and limited historical length?

      • Jack Vogel, PhD May 4, 2015 at 4:37 pm

        For international developed markets, you have a shorter period to review (1991-2014) and need to change the screens slightly due to data issues. However, the QV philosophy worked from 1991-2014.

  4. Wesley Gray, PhD May 4, 2015 at 8:43 am

    Over 10-year cycles you grind out your 400-500bps over the index fairly consistently--at least historically. Over shorter horizons, the results are much noisier. But the absolute returns are tied directly to the general market. If markets are expensive, long-term returns tend to be lower, which means the long-term returns on any long-only strategy will be lower.

    I don’t know of any long-only strategy that makes you fully immune from overall macro stock market valuations. Buying the relatively cheapest stocks in the market can help, but isn’t foolproof.

  5. Curt July 9, 2015 at 9:23 pm

    I enjoyed your “Quantitative Value” book.

    At the end of the book, it states that there is a companion website. It states that this website includes:

    – A screening tool to find stocks using the model in the book.
    – A tool designed to facilitate the implementation for a variety of tactical asset allocation models.
    - A back-testing tool that allows users to compare performance among competing investment strategies.

    The companion website now redirects. Can you tell me how I can get the above?

    Your book explains that Excess Returns Revert to the Mean. Then how come the returns for so many of Buffett’s businesses do not? See’s Candy increased ROC over 40 years.

    Did you do backtests for Graham’s screeners, such as Defensive and Enterprising? If so, what were the results?

    Your book shows a CAGR for F_SCORE of 11.29%. However, AAII shows a CAGR of 27.8% (since inception) for Piotroski: High F-Score. Why are the results so different?

    In chapter 9, you explain that investors can follow high-performance institutional investment managers. Yet, in chapter 10, you explain how a fund manager’s past performance cannot predict future performance. Can you explain the contradiction?

    Buffett followed Phil Fisher’s strategy of holding stocks for decades. As you cited, this strategy worked extremely well with See’s Candy. Have you back-tested for this buy-and-hold strategy? Why do you recommend that stocks should be turned over every year?

    Glamour stocks are expensive after they’ve become popular with the masses. How do Phil Fisher and Warren Buffett find them before the masses? Is there a way to do this quantitatively and mechanically?

    • Wesley Gray, PhD July 9, 2015 at 9:28 pm

      Hi Curt,

      You can head to our free tools.

      Here is a module on DIY investing and how to use the tool.

    • Wesley Gray, PhD August 27, 2015 at 6:51 pm

      Hi Curt, on the run here, but I’ll try and get you some quick and dirty answers.

      1. Returns on capital tend to mean-revert, on average. There are some firms with extremely strong moats, which are able to grind out high ROC for a long time. We look at 8-year historical ROC to systematically identify these sorts of firms. Unfortunately, competition is powerful, and strong moats eventually get attacked from all angles and erode over time.

      2. We have backtested a variety of Graham-related screens; I can’t remember off hand if we’ve looked at the specific one you mentioned. That said, here is a summary of a paper we wrote that looks at a few “Graham-esque” screens. As a general rule, Graham-type strategies grind out solid long-term returns, but come with hair-raising volatility--you need to be disciplined and have a long-term horizon.

      3. We are highlighting the performance of the quality-only screen (i.e., not considering price paid). The F-score strategy outlined on AAII starts off with cheap stocks first, then applies quality. As a general rule, the minute you play in the cheap-stock playground, the higher your returns will end up being, on average. If you want to explore further, you can look at the paper referenced above, where we talk about the robustness of the AAII results...but I’ll spare you the punchline: the 27.8% CAGR is way overstated and driven by micro-cap results, which are simply not believable. I traded micros/penny stocks for 10 years back in the day, and my back-of-the-envelope estimate is that transaction costs are 5-10% round trip for any sort of size (i.e., 100k+). I would disregard any results that suggest one can earn compound returns of 25% over a long time period. As this post highlights, this is impossible.

      4. Sure, the early academic research shows that past performance cannot predict future performance (e.g., Carhart 1997), on average. Subsequent research suggests that this empirical/theoretical observation doesn’t tell the complete story. In reality, the evidence that past winning managers don’t always end up being future winning managers is probably due to the fact that high performers get a lot more capital, and as they get more capital, they can’t perform as well. Here is an explanation of the logic. In the end, past performance is only predictive of future performance if there is a repeatable process in place that can maintain an edge over time. If the process sucks, or is ad hoc, then the future is anyone’s guess. Here is a piece I wrote on sustainable active investing. Hopefully that will help frame the discussion a bit better than I can in a single comment.

      5. Holding stocks for long periods is great from a tax-efficiency stand-point, and clearly, if one has the ability to identify stocks that end up like See’s Candy–they should keep doing that. The problem is that finding See’s Candy type stocks is tough. The next best approach is to identify the core drivers of long-term outperformance, understand why these drivers of performance will continue in the future, and focus on a portfolio of firms with these characteristics. A few characteristics that seem to be effective include buying cheap, out-of-favor stocks, that show signs of high quality.

      Rebalancing is important to ensure that the portfolio is holding the basket of firms with the characteristics we desire. For example, if we own stock XYZ and it goes up 500% and has a P/E of 50, it is probably a good idea to sell that stock and buy ABC, which is selling at a P/E of 5. In the end, it really comes down to the evidence. And the evidence suggests that more frequent rebalancing of a portfolio of cheap stocks is better than less frequent rebalancing. One has to always weigh the expected benefits (higher performance) against the costs (higher transaction costs), but it seems that the sweet spot on value is in the quarter-to-annual rebalance range.
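A minimal sketch of the rebalance rule described above, assuming hypothetical ticker/P-E fields and a made-up cheapness threshold (the actual QV process uses different metrics and cutoffs):

```python
# Toy rebalance rule: at each rebalance, keep only the holdings that are
# still cheap, and fill the remaining slots with the cheapest candidates.
CHEAP_PE = 15.0  # hypothetical cheapness threshold

def rebalance(holdings, candidates, target_n=40):
    # Keep holdings that are still cheap (e.g., the P/E-5 stock stays,
    # the stock that ran up to a P/E of 50 gets sold).
    kept = [s for s in holdings if s["pe"] <= CHEAP_PE]
    held = {s["ticker"] for s in kept}
    # Fill the remaining slots with the cheapest non-held candidates.
    pool = sorted(
        (s for s in candidates if s["ticker"] not in held),
        key=lambda s: s["pe"],
    )
    return kept + pool[: target_n - len(kept)]
```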

      6. I wish I knew! We are more Graham-focused value investors–buying cheap with margin of safety. A win for us is buying a stock at 5 and watching it go to 10. We’re looking for singles/doubles, not home-runs. We simply don’t have the capability, or confidence, that we can systematically find stocks at a P/E of 5 that go to a P/E of 50. I’ll leave that to Warren Buffett–he’s a lot smarter than me.

  6. Piotr Arendarski January 31, 2016 at 4:22 pm

    Is value sorted with regard to sector/industry?
    Otherwise you are left with a portfolio focused on long-term undervalued sectors.

    • Wesley R. Gray, PhD January 31, 2016 at 4:44 pm

      Great question.

      No. And yes, we mechanically tend to take sector bets via the bottom-up security selection system. This is by design, and we have tested it from many different angles. If one sector-neutralizes, the tracking error goes down and you become more of a “closet indexer,” but this means you have less long-term edge and more reason to buy the Vanguard fund instead of an active value strategy. Perhaps a sector-weighted version would be a fine institutional strategy for someone more concerned with tracking-error risk (i.e., clients are short-term benchmark focused). However, in a broader diversified portfolio context, or a portfolio of truly active value names pooled with active momentum, industry tilts generally wash out, and the investor is able to capture a higher expected value premium.
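For readers curious what sector-neutralizing means mechanically, here is a toy contrast of a raw value sort with a within-sector demeaned sort. The field names and scores are hypothetical:

```python
from collections import defaultdict

def raw_top(stocks, n):
    # Rank on the value score alone -- sector bets fall where they may.
    return sorted(stocks, key=lambda s: s["value_score"], reverse=True)[:n]

def sector_neutral_top(stocks, n):
    # Demean the value score within each sector, then rank on the residual,
    # so no single cheap sector can dominate the portfolio.
    by_sector = defaultdict(list)
    for s in stocks:
        by_sector[s["sector"]].append(s)
    adjusted = []
    for members in by_sector.values():
        mean = sum(m["value_score"] for m in members) / len(members)
        adjusted.extend((m["value_score"] - mean, m) for m in members)
    adjusted.sort(key=lambda t: t[0], reverse=True)
    return [m for _, m in adjusted[:n]]
```

With a universe where one sector is uniformly cheaper, the raw sort concentrates in that sector while the neutralized sort spreads across sectors, which is exactly the tracking-error trade-off described above.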

  7. Piotr Arendarski January 31, 2016 at 4:36 pm

    I like the idea of Forensic Accounting Screens. This looks to add value to my models. Thanks!

    However, the issue pertaining to rebalancing does not convince me.
    When you use a quant equity approach, you do not trade stocks, you trade groups (sets) of stocks.
    You should analyze the covariance between stocks’ fundamentals within one group (long or short).
    Assume that you trade a large group of stocks (800 on one side, as AQR does, or 400, as Gotham does). If the P/E of stock X increased to 100, there should be co-movement from the side of the other stocks.

    I would rather modify the sets when there is a significant change in the P/E of each set (long or short). This makes more sense to me when I trade sets, not single stocks.

  8. Varun Sahay April 7, 2016 at 9:27 am

    Dr. Gray, Wow, incredible stuff. A 10-year investing horizon, buying the cheapest high-quality value stocks today based on 8 years of historical data, with a market cap over 2 billion USD, from a playing field of 800 US companies out of a grand total of approximately 4,000 companies. If this 10-year holding period includes no rebalancing, then what do you do when you are invested? What happens when a 7-year-old bull market comes to an end? These cheap stocks probably have a broken leg, hence the cheap short-term price; when the market turns, don’t they turn too? When do you take profits or rebalance your portfolio? Past performance is no indicator of future performance. Graham and Buffett bought when the world was not flat, and those companies’ moats are drying up today. Take American Express or Coca-Cola, for example. In today’s world, 10 years is almost two cycles from peak to trough, excluding this bull market. So even if one was to buy today and this bull market ended, would it not be an anti-cyclical and contrarian investment--everything against the investment? Would CAT, IBM, Exxon, Freeport, AIG, Transocean, and Tenet Healthcare all make the cut in your process?

    Thanks for the interesting insight and look forward to your comments.

    • Wesley Gray, PhD April 7, 2016 at 10:51 am

      There is annual rebalancing in all the results mentioned. In practice — assuming there is a tax efficient way to facilitate — you want to get more frequent rebalancing so the portfolio is always holding the cheapest highest quality stocks.

  9. G-Man November 18, 2016 at 6:06 am

    Please correct me if I am wrong, but in the book “Quantitative Value” you identify EBIT/TEV as the optimal valuation metric; however, above, TEV/EBITDA is identified as the best valuation metric. Why the difference (between EBIT and EBITDA), and which one has performed better historically?
    Thanks a lot,

    • Wesley Gray, PhD November 18, 2016 at 8:50 am

      They are very similar. At the margin, EBIT/TEV is arguably more effective.
      Bottom line: all value metrics “work,” but you need iron-willed discipline, and you have to hold your nose sometimes.

  10. yowie89 January 2, 2017 at 12:36 am

    Hello Wesley, I am wondering if I have missed the TTM numbers for the quantitative screening. Will the strategy work? Have you tested it before? Suppose an investor enters the market toward year end, when most companies have not yet made their latest filings. Please advise.

  11. Sharat February 25, 2017 at 10:11 pm

    Hello Dr Gray,

    I recently discovered your blog and I must say, I am hooked. So many articles of learning for the lay person and I really appreciate you taking the effort to explain these things for people like me. Right now, I am just going through all the blogs one at a time :-).

    As an individual trying to do things himself, one query I wanted to ask: was there a period in your backtesting (similar to the Graham portfolio) where you ended up with a value portfolio of zero, or a very small number of stocks? Or do you always end up investing in the roughly 40 stocks you seem to indicate? My guess is that during various bull-market periods the portfolio (in terms of number of stocks) would have shrunk. In essence, are you willing to let go of strict adherence to F-scores or FS-scores as long as the price is OK?

    Thanks and Regards,

  12. some guy May 11, 2017 at 11:11 am

    Any thoughts on the recent Kok, Ribando and Sloan paper “Facts about Formulaic Value Investing” that says quant value methods “systematically identify companies with temporarily inflated accounting numbers”?

  13. Mak Sherman April 12, 2018 at 1:14 am

    Hello all,

    Thank you for writing down and thoroughly presenting your thoughts in your book on “quantitative value”. I appreciate the style in which it was written as an homage to Ben Graham, who constantly reminded the readers to reflect on the various cognitive biases that we have as investors.

    In addition, thank you for succinctly condensing and re-presenting many of the key principles and thoughts of esteemed ‘legends’ and academia.

    Nonetheless, I certainly have a few questions after digesting the tome, with which I hope to generate some meaningful discussion:

    1. I would like to ask if there is a way to generate or replicate Ed Thorp’s strategy of pricing warrants probabilistically, as described in Chapter 1. What sort of statistical or computing methods would I need to efficiently work through his approach? Do you know of any good resource(s) to recommend?

    2. Is it possible to improve upon the FS_SCORE by transforming the ‘Stability’ measures and the ‘Recent Operational Improvements’ measures into “Sustained Stability” or “Sustained Operational Improvements” by dividing the average ratios by their standard deviation, as described on pg. 107 of the book? This is because the FS_SCORE seems to act like a sort of ‘momentum’ indicator for the current financial health of the companies examined, rather than a long-term financial health indicator.

    3. Is there an easy way to extract corroborative signal data from EDGAR or relevant databases? Were the buyback signals incorporated into the back-testing? I noticed that the buyback signals were not in the final checklist.

    4. While this was certainly not a book on valuation, I would like to know whether these hedonic or logistical (if I am not wrong) principles can be applied to the forecasting of revenues or cost margins in the art of valuation.

    For example, to estimate the EBIT of Coca-Cola, it might be useful to look at sugar-cane prices, aluminium prices, or plastic (polyethylene) prices to arrive at an accurate cost estimate for the various operations of the Coca-Cola corporation, so that a more accurate valuation of Coca-Cola, based on earnings or cash flows, can be estimated.

    Finally, in “Graham’s Simple Rules”, Graham recommended having a definite criterion for selling. While Graham recommended selling a stock once it has returned 50%, or after 2 years if it had not reached that level, my interpretation of his idea is instead to sell a stock when it reaches its estimated intrinsic value.
    I am curious whether, if such a selling signal were incorporated into a back-test where valuations are arrived at via the above-mentioned ‘quant’ method, it would improve a portfolio’s overall results.
    To go one step further, perhaps this could create a “valuation index” as a sort of selling signal?
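For concreteness, the “sustained stability” transformation proposed in question 2 — scoring each ratio’s history by its mean divided by its standard deviation — could be sketched as below. This is only an illustration of the commenter’s idea; the `sustained_stability` helper and the sample ROA histories are hypothetical, not from the book:

```python
import statistics

def sustained_stability(ratio_history):
    # Score a fundamental ratio's history by mean / standard deviation
    # (an inverse coefficient of variation): the score is high only
    # when the ratio has been both strong on average and steady.
    mean = statistics.fmean(ratio_history)
    sd = statistics.stdev(ratio_history)
    return mean / sd if sd > 0 else float("inf")

# Hypothetical 8-year ROA histories, both averaging 10%:
steady = [0.10, 0.11, 0.09, 0.10, 0.10, 0.11, 0.09, 0.10]
volatile = [0.02, 0.25, -0.05, 0.18, 0.01, 0.22, 0.07, 0.10]

print(sustained_stability(steady) > sustained_stability(volatile))  # True
```

Where a Piotroski-style binary flag only asks whether this year beat last year, dividing by the dispersion of the whole history penalizes firms whose “improvement” is mostly noise — which is the long-term-health framing the question suggests.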
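The mechanical exit rule attributed to Graham in the final point — sell once the stock has returned 50%, or after 2 years if it never reached that level — reduces to a simple check. The helper name, default parameters, and example prices below are illustrative only:

```python
def graham_should_sell(buy_price, current_price, holding_years,
                       target_gain=0.50, max_years=2.0):
    # Graham's rule as described above: exit on a 50% gain,
    # or on the 2-year time stop, whichever comes first.
    return (current_price >= buy_price * (1 + target_gain)
            or holding_years >= max_years)

print(graham_should_sell(100, 155, 0.5))  # True  (hit the +50% target)
print(graham_should_sell(100, 120, 1.0))  # False (still waiting)
print(graham_should_sell(100, 120, 2.0))  # True  (time stop triggered)
```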

    Would appreciate any thoughts and insights for the points above.


    • April 13, 2018 at 6:10 pm

      1. Not sure. But Google is a good place to start!
      2. Possibly, but my guess would be ~0 effect on the final results.
      3. We do not include this in the analysis or our live systems. Here is a good post on how to optimize the signal if you were going to incorporate it into your systems; it should also have some info on where the data is accessible.
      4. In our system the buy/sell rules are built into the model. If something is cheap/quality at the rebalance period, we buy/hold it. If it is no longer cheap/quality, we sell it. The price appreciation doesn’t matter (at least as a direct measurement).
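The rebalance-driven rule described in point 4 — hold whatever still screens as cheap/quality, sell whatever no longer does, regardless of price appreciation — could be sketched as follows. The `rebalance` helper, the `top_n=40` default, and the screening callback are assumptions for illustration, not Alpha Architect’s actual system:

```python
def rebalance(holdings, candidates, is_cheap_quality, top_n=40):
    # Keep current names that still pass the cheap/quality screen;
    # sell the rest, and fill open slots from new screen passers.
    # Note: price appreciation never enters the decision directly.
    keep = [t for t in holdings if is_cheap_quality(t)]
    buys = [t for t in candidates if is_cheap_quality(t) and t not in keep]
    portfolio = keep + buys[: max(0, top_n - len(keep))]
    sells = [t for t in holdings if t not in keep]
    return portfolio, sells

# Toy screen: only "AAA" and "BBB" currently rate as cheap/quality.
screen = lambda ticker: ticker in {"AAA", "BBB"}
print(rebalance(["AAA", "ZZZ"], ["BBB", "CCC"], screen))
# → (['AAA', 'BBB'], ['ZZZ'])
```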

Leave A Comment