Stop Using the Altman-Z Score.


July 23rd, 2011 | Factor Investing, Research Insights | 28 Comments

Ed Altman is simply a legend.

Everyone in finance is familiar with the Altman Z-Score, and many analysts use it religiously, despite decades' worth of research suggesting there are better methods. The Altman Z-Score kinda reminds me of the CAPM: the CAPM has been shown to not be very effective, and yet everyone still uses it.

Nonetheless, Prof. Altman single-handedly created an empire based on his original 1968 book/research on predicting corporate bankruptcy.

Prof. Altman went ahead and outdid himself with his new ZETA score:

The point of this post is to highlight some recent research that concocts a model that predicts bankruptcy better than the Altman-Z AND is relatively straightforward to implement (at least relative to the ZETA score, which is proprietary and involves a lot of legwork to reverse engineer).

Before we begin, here are some recent models showing that Altman-Z is dead:

  • Shumway (2001)
    • Shows that half of the variables included in Altman-Z are no longer predictive of bankruptcy–ouch.
    • He also identifies a model that trumps the Altman-Z in out of sample tests–double ouch.
  • Chava and Jarrow (2004)
    • Verify with expanded data that Altman’s model sucks vs. Shumway’s model.
    • Determine that controlling for industry effects can significantly improve bankruptcy prediction models.

What does this paper add to the party?

The authors devise a model that ends up improving forecast accuracy by 16% compared to Shumway's and Chava and Jarrow's models (which in turn means it is way better than Altman-Z).

Here is how they set up their prediction variables:

  • Profitability measure = weighted average(quarter's net income / MTA)(1)
    • MTA = market value of total assets = book value of liabilities + market cap
  • Leverage measure = total liabilities / MTA
  • Short-term liquidity = cash & equivalents / MTA
  • Recent relative performance = weighted average(log(1 + stock's return) - log(1 + S&P 500 return))
  • Recent volatility = annualized standard deviation of the stock's returns over the previous 3 months
  • Relative size = log(stock market cap / S&P 500 total market value)
  • Overvaluation factor = MTA / adjusted book value, where adjusted book value = book value + 0.1*(market cap - book value)
  • Stock price level = log(recent stock price), capped at $15, so a firm with a stock price of $20 would be given a value of log(15) instead of log(20).
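
As a rough sketch, assuming quarterly raw inputs and ignoring the paper's exact timing and winsorization conventions, the variables above might be computed like this (the function name is my own, and the sqrt(252) annualization of a daily return standard deviation is my assumption):

```python
import math

def chs_inputs(net_income_q, book_liabilities, market_cap, book_value,
               cash, stock_price, stock_ret, sp500_ret, sp500_mktval,
               daily_ret_std_3m):
    """Illustrative construction of the predictor variables listed above.
    All names and conventions here are a sketch, not the paper's exact code."""
    mta = book_liabilities + market_cap                # market value of total assets
    nimta = net_income_q / mta                         # quarterly profitability
    tlmta = book_liabilities / mta                     # leverage
    cashmta = cash / mta                               # short-term liquidity
    exret = math.log(1 + stock_ret) - math.log(1 + sp500_ret)  # relative performance
    sigma = daily_ret_std_3m * math.sqrt(252)          # annualized 3-month volatility (assumed convention)
    rsize = math.log(market_cap / sp500_mktval)        # relative size
    adj_book = book_value + 0.1 * (market_cap - book_value)
    mb = mta / adj_book                                # overvaluation factor
    price = math.log(min(stock_price, 15.0))           # log price, capped at $15
    return dict(NIMTA=nimta, TLMTA=tlmta, CASHMTA=cashmta, EXRET=exret,
                SIGMA=sigma, RSIZE=rsize, MB=mb, PRICE=price)
```

Note the price cap in action: a $20 stock gets log(15), exactly as described above.
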
Here is the “explanation” from the text:

What the authors meant to say is that their weighted average is simply calculated as follows:
0.5333*x(t) + 0.2666*x(t-1) + 0.1333*x(t-2) + 0.0666*x(t-3)
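
That four-quarter weighted average is trivial to code up; here is a minimal sketch:

```python
def weighted_average(values):
    """Geometrically declining weighted average over the last four quarters.
    values = [x(t), x(t-1), x(t-2), x(t-3)], most recent first."""
    weights = [0.5333, 0.2666, 0.1333, 0.0666]  # weights from the formula above
    return sum(w * v for w, v in zip(weights, values))
```

The weights sum to (essentially) one and halve each quarter back, so the most recent quarter carries about half the total weight.
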

Here is a table comparing the stats on the various measures above for the universe of firms and for firms that have failed:

The results are hypothetical results and are NOT an indicator of future results and do NOT represent returns that any investor actually attained. Indexes are unmanaged, do not reflect management or trading fees, and one cannot invest directly in an index. Additional information regarding the construction of these results is available upon request.

It certainly seems that the variables are capturing something regarding failed firms!

So how do we go about predicting bankruptcy/failure? Behold the magical distress formula:

After running some logistic regressions the authors determine that their model beats the best alternatives out there over a variety of time periods:


Here are the results from the logistic regression:

If you forgot to get a PhD in some quantitative field and you are not sure how to use these regression results, here is a quick background on logistic regressions:

  • Calculate the following:
    • Logit = -8.87 - 0.09*PRICE + 0.07*MB - 0.005*RSIZE + 1.55*SIGMA - 7.88*EXRETAVG - 2.27*CASHMTA + 1.60*TLMTA - 20.12*NIMTAAVG
    • P = probability of failure = 1/(1 + exp(-Logit))
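
Putting the two bullets together, here is a minimal sketch of the calculation (the coefficients are the ones quoted above; the function name is my own):

```python
import math

def failure_probability(NIMTAAVG, TLMTA, CASHMTA, EXRETAVG,
                        SIGMA, RSIZE, MB, PRICE):
    """Return (logit, P) using the coefficients quoted in the post."""
    logit = (-8.87
             - 20.12 * NIMTAAVG
             + 1.60 * TLMTA
             - 2.27 * CASHMTA
             - 7.88 * EXRETAVG
             + 1.55 * SIGMA
             - 0.005 * RSIZE
             + 0.07 * MB
             - 0.09 * PRICE)
    p = 1.0 / (1.0 + math.exp(-logit))  # logistic transform of the logit
    return logit, p
```

Feed it the eight inputs for a firm and you get back the logit and the failure probability; note that the large negative intercept means most healthy firms will (correctly) score well under 1%.
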

Once you have calculated your P’s you are ready to predict which firms are likely to go bankrupt. So how might one use this information? Well, if you could predict which firms are going bankrupt–assuming the market hasn’t already priced in a bankruptcy–you might be able to make some money. Let’s see.

Here are the results of sorting portfolios based on "P" every January and holding the stocks for one year.


Pretty wild, eh? Stocks with low probabilities of failure do well, but from a statistical standpoint, it is hard to say if there is any alpha. However, check out the high probability of default firms–wow–negative alphas galore!

We’re getting closer to affording our gold-plated Mercedes!

Investment Strategy

Strategy 1:

  1. Stop using Altman-Z to predict bankruptcy

Strategy 2:

  1. Calculate logits using the estimates from table 2
  2. Calculate the probability of bankruptcy (P from above) using the logit figures calculated in step 1.
  3. Go long low P firms, and WAY short high P firms.
  4. Make money.

To juice up strategy 2:

  1. Focus on distressed firms in the extreme growth and value categories (see below)


Commentary:

I’ve been playing with the ‘financial quality’ variables for one of our in-house ‘value factor’ models recently (every quant shop has to have one, right?). I’ll admit it, we were using Altman-Z as a component of that variable, but after doing some research I’m sh&$canning Altman-Z and going to test this model out instead. No disrespect to Prof. Altman, but when the evidence shows something works better, why not use it?


Predicting Financial Distress and the Performance of Distressed Stocks

  • John Y. Campbell, Jens Hilscher, and Jan Szilagyi
  • A version of the paper can be found here. (Here is the Journal of Finance version)

Abstract

In this paper we consider the measurement and pricing of distress risk. We present a model of corporate failure in which accounting and market-based measures forecast the likelihood of future financial distress. Our best model is more accurate than leading alternative measures of corporate failure risk. We then use our measure of financial distress to examine the performance of distressed stocks from 1981 to 2008. We find that distressed stocks have highly variable returns and high market betas and that they tend to underperform safe stocks by more at times of high market volatility and risk aversion. However, investors in distressed stocks have not been rewarded for bearing these risks. Instead, distressed stocks have had very low returns, both relative to the market and after adjusting for their high risk. The underperformance of distressed stocks is present in all size and value quintiles. It is lower for stocks with low analyst coverage and institutional holdings, which suggests that information or arbitrage-related frictions may be partly responsible for the underperformance of distressed stocks.

Data Sources

The study covers the 1963-2008 time period. Data come from Compustat and CRSP. Data on failures come from Kamakura Risk Information Services.


  • The views and opinions expressed herein are those of the author and do not necessarily reflect the views of Alpha Architect, its affiliates or its employees. Our full disclosures are available here. Definitions of common statistics used in our analysis are available here (towards the bottom).

References

1. weighted average is represented by some silly a&* equation meant to create confusion for the layman.

About the Author:

After serving as a Captain in the United States Marine Corps, Dr. Gray earned a PhD and worked as a finance professor at Drexel University. Dr. Gray’s interest in bridging the research gap between academia and industry led him to found Alpha Architect, an asset management firm that delivers affordable active exposures for tax-sensitive investors. Dr. Gray has published four books and a number of academic articles. Wes is a regular contributor to multiple industry outlets, including the Wall Street Journal, Forbes, ETF.com, and the CFA Institute. Dr. Gray earned an MBA and a PhD in finance from the University of Chicago and graduated magna cum laude with a BS from The Wharton School of the University of Pennsylvania.

28 Comments

  1. […] Stop using the Altman-Z score to predict distress.  (Empirical Finance Blog) […]

  2. shawn July 27, 2011 at 3:20 am

    I loved your explanation that “weighted average is represented by some silly a&* equation meant to create confusion for the layman”. Hear hear. May more researchers straight talk like you do. In particular, may more abstracts take the form of your succinct “Make Money.” summaries.

  3. […] Predicting Financial Distress and the Performance of Distressed Stocks – via EmpiricalFinance- “In this paper we consider the measurement and pricing of distress risk. We present a model of corporate failure in which accounting and market-based measures forecast the likelihood of future financial distress. Our best model is more accurate than leading alternative measures of corporate failure risk. We then use our measure of financial distress to examine the performance of distressed stocks from 1981 to 2008.We find that distressed stocks have highly variable returns and high market betas and that they tend to underperform safe stocks by more at times of high market volatility and risk aversion. However, investors in distressed stocks have not been rewarded for bearing these risks. Instead, distressed stocks have had very low returns, both relative to the market and after adjusting for their high risk. The underperformance of distressed stocks is present in all size and value quintiles. It is lower for stocks with low analyst coverage and institutional holdings, which suggests that information or arbitrage-related frictions may be partly responsible for the underperformance of distressed stocks.” […]

  4. […] Predicting Financial Distress and the Performance of Distressed Stocks – via EmpiricalFinance- “In this paper we consider the measurement and pricing of distress risk. We present a model of corporate failure in which accounting and market-based measures forecast the likelihood of future financial distress. Our best model is more accurate than leading alternative measures of corporate failure risk. We then use our measure of financial distress to examine the performance of distressed stocks from 1981 to 2008.We find that distressed stocks have highly variable returns and high market betas and that they tend to underperform safe stocks by more at times of high market volatility and risk aversion. However, investors in distressed stocks have not been rewarded for bearing these risks. Instead, distressed stocks have had very low returns, both relative to the market and after adjusting for their high risk. The underperformance of distressed stocks is present in all size and value quintiles. It is lower for stocks with low analyst coverage and institutional holdings, which suggests that information or arbitrage-related frictions may be partly responsible for the underperformance of distressed stocks.” […]

  5. […] Using the Altman-Z Score. Posted on July 23, 2011 by wes Predicting Financial Distress and the Performance of Distressed […]

  6. […] with the Campbell Hilscher and Szilagyi distress model–we even highlighted it on the blog at http://turnkeyanalyst.com/2011/07/stop-using-altman-z-score/. The punchline was that stocks identified by the distress model were in a serious hurt locker […]

  7. […] Here is a post on the subject: http://turnkeyanalyst.com/2011/07/stop-using-altman-z-score/ […]

  8. G-Man August 6, 2016 at 11:37 am

    Great article, thank you!

    To clarify, in relation to this comment – “when the evidence shows something works better, why not use it???”
    Is that to say the Z-score still works to an extent? I ask because as a retail investor the Z-score is more readily available from various financial sites.

    What are your thoughts on using the more humble debt to equity ratio combined with the current and quick ratio to determine if a company is at risk of bankruptcy? These tend to be easily available/calculated.

    Thanks,
    G

    • Wesley Gray, PhD August 6, 2016 at 9:06 pm

      Hey G,
      Thanks for the question.
      I imagine it works better than doing nothing at all…but there doesn’t seem to be a lot of recent evidence suggesting it is highly effective. Of course, I’m not sure any of these bankruptcy/distress prediction algos are ever that accurate.
      Good luck!
      Wes

  9. Josh Wallner August 1, 2017 at 3:23 am

    has this formula been incorporated into any stock scanners?

  10. Moncho Martínez August 23, 2017 at 8:07 pm

    Hello,

    First of all, congrats on your great blog, it's full of interesting stuff.

    I'd like to double-check the logit formula, because I got very strange results, so there is a typo somewhere or I am totally lost.

    The problem is that the constant in the logit formula, -8.87, is too big, so it's almost impossible to get a positive value for the logit, and as a consequence the P value is always lower than 1%. I tried with companies that went broke and did not get a logit higher than -5.

    Is it possible that -8.87 is a typo and the actual value is -0.87?

    Kind regards.

  11. G-Man September 11, 2017 at 1:41 pm

    Thanks so much for this great post!

    1) I note the date is July 23rd, 2011; was it updated more recently? I ask because I recall (maybe incorrectly) reading a post with this title but without e.g. a specific explanation as to how one could implement it as an individual. I’m sure I asked how well a simple Debt/Equity ratio works in the comments too… (I could be having weird dreams though…)

    2) Why are there two versions of the one paper? Is that the norm? Are there any differences in the content? (I read the one Published in Jan 2010 (yeah, all of it!)) which contains the tables corresponding with the extracts in your post.

    3) To apply this model to a non US stock which index would it be advisable to track the market volatility? An all world market index or the local market index?

    Thanks again for taking such esoteric content and making it more accessible to everyday people!
    Kind regards,

    • Jack Vogel, PhD September 25, 2017 at 7:42 am

      1) I am not aware of a data update. Since this model will yield a lot of “false positives” one can use this model as a tool to eliminate the bottom x% of a universe.
      2) Yes, there is generally the working version of the paper and then the published version; this is the norm.
      3) The stock volatility in this paper is at the individual stock level, not the index level.

  12. Matt February 15, 2018 at 5:32 am

    Hello. I am trying to work out where I’m going wrong here. I am doing this primitively in Excel. I have used Apple (AAPL) numbers. Some of these might be a bit off at this point as they’re out of date and I’ve been tweaking them a bit. But the result is fundamentally the same: near 100% probability of corporate failure (99.999%).

    Here are my inputs into the equation:

    NIMTA TLMTA EXRET RSIZE SIGMA CASHMTA MB PRICE
    AAPL 0.014 0.236 0.393 -1.444 0.366 0.252 5.317 1.176

    The weird thing is, if I make things *worse*, I get a *better* (i.e. lower) probability of failure. For example, if I increase NIMTA from 0.014 to a large 0.2, I get 100% chance of failure (versus 99.999%).

    It’s like the true result is the inverse of my result (1 – x, where x is the current output probability).

    FYI: My answer using the above inputs is around -11 or -12 (the output logit probability) which I’m then using 1/1(+EXP(LPFD)) to find the percentage probability.

    The values shown in the paper are as follows (for reference). To be honest I can’t seem to spot any big differences from my AAPL example that can’t be “explained away” reasonably. Since they’re all ratios I don’t think there are any “scaling errors”.

    paper NIMTA TLMTA EXRET RSIZE SIGMA CASHMTA MB PRICE

    mean 0.000 0.445 -0.011 -10.456 0.562 0.084 2.041 2.019
    median 0.006 0.427 -0.009 -10.570 0.471 0.045 1.557 2.474

    • Wes Gray February 16, 2018 at 8:20 am

      This is from a few weeks ago…6% probability.
      NIMTAAVG TLMTA CASHMTA EXRETAVG SIGMA RSIZE MB PRICE PFA
      0.011494161 0.229423947 0.168981824 -0.190851433 0.192513981 0.626848604 5.214428525 1.176091259 0.06%

      • Matt February 16, 2018 at 1:12 pm

        I appreciate the feedback. I recreated the same formula and used your inputs but got the same result. Here is an open spreadsheet with the 99%+ probability of failure once again:

        https://docs.google.com/spreadsheets/d/1OYY24NvtjFNaZjDu_MC9PETgOcpOqNVWrj8oiOVWR_A/edit?usp=sharing

        There must be something fundamentally wrong with how I’m implementing the formula… It’s even weirder that both “good” companies and “bad” companies get similar results.

        • Wes Gray February 16, 2018 at 2:59 pm

          heading out, but if you are doing the calcs right you should get around 7%

          • Matt February 16, 2018 at 5:30 pm

            Well thanks but I just tried again by literally copying your formula and pasting it into the spreadsheet, and then switching out the abbreviations for the relevant inputs (PRICE, MB, etc.), and I get the same result. Makes no sense to me… I’m no math guy either so I really have no idea.

            Same numbers as yours.

            Same formula as yours.

            Different result!

          • Matt February 17, 2018 at 10:43 am

            Your “PFA” says 0.06% — so was it 0.06% or 6%? I am able to get 0.06% in Excel now, while the logit is -7.50 using your data. Thanks…

      • Shaheen Iqubal April 2, 2018 at 10:54 am

        I was introduced to your blog post last week and I am really enjoying it. I am replicating this paper and found your blog very helpful. As a sanity test, I was looking at the factor values for AAPL you posted and have the following two questions:

        1. My initial intuition was that the RSIZE value will always be negative, because the ratio of stock market value/S&P market value will always be between 0 and 1. How did you get 0.628 for RSIZE?

        2. I noticed that you are using log instead of natural log (ln) for price factor. Is there a reason?

        Thanks for your help!

      • Shaheen Iqubal April 2, 2018 at 4:58 pm

        RSIZE = log(market value of firm/total market value of S&P 500). The ratio should always be less than 1 and therefore the log of the ratio should be negative. How do you get positive value of 0.626849 for RSIZE factor?

        • Wes Gray April 2, 2018 at 5:42 pm

          you are correct. calc issue.

          • Shaheen Iqubal April 3, 2018 at 12:01 am

            Thanks for your prompt response! I also noticed from the values you posted for Apple that you are using log instead of ln in your calculation; I noticed that on the log(price). Can you please confirm whether that is correct?

            Like Matt, I pasted your values into an Excel spreadsheet and could not replicate the calculation. I did not get a 6% probability. I would really appreciate it if you could show us what we are missing. I got the same number as Matt, where the value for the logit was -7.05.

            Appreciate your help, and really enjoying your blog!

  13. Matt February 16, 2018 at 6:13 pm

    See my image here which is a summary of the problem: https://ibb.co/mHhiqn

    Sorry to be a bother, but the image shows, in straightforward fashion, your formula and your numbers. The logit value, before getting the probability, is -7.059439442 (as shown in the screenshot).
