“Publicity is justly commended as a remedy for social and industrial diseases. Sunlight is said to be the best of disinfectants; electric light the most efficient policeman.”
Louis Brandeis, Other People's Money and How the Bankers Use It, 1914.
Disclosure is a fundamental pillar of our market-based financial system. When information is accurate and complete, asset prices can reflect both expected return and risk. Without effective disclosure, how could anyone make good savings and investment decisions?
Having information is one thing; using it appropriately is something else entirely. To evaluate the relative merit of a large number of potential investments, most people (including us) rely on specialists to do the monitoring: Independent auditors vouch for the accuracy of financial statements. Credit rating agencies tell us about the riskiness of bonds. Various brokers and specialized firms rate equities. And, for mutual funds, there are several monitors, of which Morningstar is the most prominent.
But when the specialists fail to do their jobs, disaster can strike. Examples abound: auditors failed in the case of Enron; equity analysts overvalued technology firms during the dotcom boom; and rating agencies’ inflated assessments of structured debt contributed substantially to the financial crisis of a decade ago. So, there is cause for concern anytime we see evidence that key monitors are falling short.
This brings us to the recent work of Chen, Cohen and Gurun (CCG) on Morningstar’s classification of bond mutual funds. They argue that mutual fund managers are providing inaccurate reports, and that Morningstar is taking them at their word when better information from standard disclosures is readily available. In this post, we describe CCG’s forensic analysis, but we don’t need to postpone our conclusion: if we can’t trust the monitors, then markets will not function properly.
At the end of 2018, there were more than 8,000 U.S.-registered mutual funds with nearly $18 trillion in assets. Households own the bulk of this (about 80%), either directly or through pension accounts. Roughly speaking, we can divide the funds into three categories: equity, bond and money market funds. Open-ended taxable bond funds, the focus of CCG’s analysis, account for roughly 20% of the total (both in number of funds and assets under management), so accurate information regarding the risk and returns of these funds is a pretty big deal.
As for the ratings, Morningstar ranks mutual funds on a scale of one to five stars. Rankings are performance based, adjusted for risk and costs, relative to funds in the same category. Evans and Sun show that these ratings influence investors. In 2002, Morningstar changed its methodology in a way that shuffled many funds’ peer groups: a fund in the top 20% prior to 2002 might fall into the next quintile afterward. Nothing about the funds’ underlying characteristics changed, but Morningstar’s ratings did, and so did flows into and out of the funds. In other words, investors care about the stars! (Hartzmark and Sussman demonstrate that sustainability ratings, for which Morningstar awards one to five globes alongside its stars, also drive fund flows.)
As for the details of the CCG analysis, each bond mutual fund reports its holdings to Morningstar directly. According to CCG, Morningstar then uses these to place bond funds into one of three credit quality categories: high, medium and low. These correspond to the ratings of the fund’s primary holdings: high is AAA and AA, medium is A and BBB, and low is BB and B. Funds in each category are then compared to their peers and awarded stars.
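To make the rule concrete, here is a minimal sketch of the bucketing scheme just described. This is our own illustration, not Morningstar’s actual code: the rating table follows the high/medium/low breakpoints in the text, while the function name and the choice to assign the category of the largest rating bucket are simplifying assumptions.

```python
# Illustrative sketch (not Morningstar's methodology) of mapping a bond
# fund's holdings to a high/medium/low credit quality category.

RATING_TO_BUCKET = {
    "AAA": "high", "AA": "high",
    "A": "medium", "BBB": "medium",
    "BB": "low", "B": "low",
}

def credit_quality_category(holdings):
    """Assign a fund to high/medium/low by its largest rating bucket.

    `holdings` is a list of (rating, portfolio_weight) pairs; ratings
    outside the table above are ignored for simplicity.
    """
    weights = {"high": 0.0, "medium": 0.0, "low": 0.0}
    for rating, weight in holdings:
        bucket = RATING_TO_BUCKET.get(rating)
        if bucket is not None:
            weights[bucket] += weight
    return max(weights, key=weights.get)

# A hypothetical fund with mostly single-A exposure lands in "medium".
fund = [("AAA", 0.30), ("A", 0.45), ("BB", 0.25)]
print(credit_quality_category(fund))  # medium
```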
Yet, there is another source for the same information about the credit quality of fund holdings: quarterly reporting to the SEC. Each fund files a detailed public disclosure of its holdings. Because the discovery of a misleading regulatory filing can bring significant penalties, funds have a strong incentive for their SEC disclosures to be accurate.
CCG ask whether the data published by Morningstar as the basis for its calculations conform to the SEC filings. Their answer is no! Using information from the start of 2017 through mid-2019, CCG construct a sample of 3,042 quarterly reports. For each of these, they compute the fraction of investment grade bonds reported to Morningstar and compare it to the corresponding SEC filing. The following figure plots the frequency distribution of the difference. The striking result is that, in three quarters of the cases, the report to Morningstar overstates credit quality relative to the regulatory filing, with overstatements as large as 15 percentage points.
Frequency distribution of percentage-point gap between reported and calculated shares of investment grade holdings, 2017-2Q 2019
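In spirit, CCG’s fund-by-fund comparison reduces to a simple calculation: compute the investment grade share implied by each data source and take the difference in percentage points. The sketch below illustrates this with made-up holdings; the data and function names are hypothetical, not CCG’s code.

```python
# Hypothetical sketch of the reported-vs-filed comparison: compute the
# investment grade share from each data source and take the gap.

INVESTMENT_GRADE = {"AAA", "AA", "A", "BBB"}

def ig_share(holdings):
    """Share of portfolio weight held in investment grade bonds."""
    total = sum(w for _, w in holdings)
    ig = sum(w for r, w in holdings if r in INVESTMENT_GRADE)
    return ig / total

# (rating, weight) pairs: one set as reported to Morningstar, one as
# derived from the fund's SEC filing (both made up for illustration).
reported_to_morningstar = [("AAA", 0.2), ("BBB", 0.6), ("BB", 0.2)]
from_sec_filing = [("AAA", 0.2), ("BBB", 0.5), ("BB", 0.3)]

gap_pp = 100 * (ig_share(reported_to_morningstar) - ig_share(from_sec_filing))
print(f"overstatement: {gap_pp:.0f} percentage points")  # 10
```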
Since Morningstar supposedly uses this information only to classify the funds, the overstatement matters only if it leads to a change in category. To see the impact, CCG use Morningstar’s published methodology to classify funds both on the data funds report and on the regulatory filings. They find that roughly 30 percent of cases in 2018 (432 out of 1,448 fund-quarter observations for which they have information) are classified in a higher credit quality category than the regulatory data warrant. To be clear, the information that CCG have is publicly available. However, according to CCG, Morningstar does not use it for categorization.
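The reclassification check can likewise be sketched: classify each fund-quarter under both data sources and count the cases that land in a higher-quality bucket under the self-reported data. Again, the sample data and helper below are hypothetical illustrations, not CCG’s actual procedure.

```python
# Hypothetical sketch of the misclassification tally: compare the category
# implied by the data reported to Morningstar with the category implied by
# the SEC filing, and count upward shifts.

ORDER = {"low": 0, "medium": 1, "high": 2}

def count_upgrades(pairs):
    """`pairs` holds (category_from_reported, category_from_sec) tuples;
    return how many fund-quarters look higher quality as self-reported."""
    return sum(1 for reported, sec in pairs if ORDER[reported] > ORDER[sec])

sample = [("high", "medium"), ("medium", "medium"), ("medium", "low")]
print(count_upgrades(sample))  # 2
```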
The fund misclassification has a variety of important consequences. First, funds that overstate their investment grade bond holdings earn higher returns than the properly classified, lower-risk funds in their category. Second, because these funds appear to perform better, they receive higher star ratings: on average, misstatement raises a fund’s rating by one-fifth to one-third of a star. Third, the higher star ratings, in turn, attract investor inflows. Fourth, the riskier funds’ expense ratios are higher, suggesting that fund managers are compensating themselves for their (specious) outperformance. (These results are consistent with Choi and Kronlund’s conclusion that bond funds generally reach for yield in an effort to attract inflows.)
We should note that Morningstar disputes these findings. A recent story on the website MarketWatch reports that:
For its part, Morningstar said the report’s authors misunderstood Morningstar’s proprietary methodologies and used some of the firm’s data “incorrectly to make sweeping conclusions,” in a statement to MarketWatch. “We stand by the accuracy of our data and analytics, and we are reaching out to the authors with an offer to help understand the data they used and to clarify the Morningstar methodologies we employ.”
The CCG finding is in line with other evidence that mutual funds might misrepresent their holdings. Earlier this year, Lettau, Ludvigson and Mandel (LLM) analyzed a broad set of over 2,500 actively managed funds, of which 20% identified themselves as “value funds” and 40% as “growth funds.” Yet LLM show that virtually all of the self-reported “value” funds were essentially growth funds: only one of them actually held stocks with low price-to-book ratios.
When we take a small step back, it is easy to understand the temptation for actively managed funds to raise their returns through concealed risk-taking. After all, passively managed index funds outperform managed funds over most horizons. The following table shows S&P’s tabulation of the fraction of mutual funds that underperformed their benchmark index over the 10 years through 2018. It also shows the cumulative under- and over-performance of these funds. So, for example, 85% of large-cap equity funds underperformed the S&P 500, and the cumulative underperformance was 7.9%. (Investment grade bond funds appear to have outperformed the Barclays U.S. Government/Credit Intermediate Index. But, given the CCG results, one has to wonder if the funds are properly classified.)
Fraction of managed funds that underperform their benchmark index over 10 years ending 2018
What should we do about all of this? First, continue to support the forensic analysis of researchers like Chen, Cohen and Gurun. They follow in the footsteps of many others who have looked for evidence of market manipulation. Examples include Christie and Schultz’s discovery 25 years ago that, to benefit from larger bid-ask spreads, NASDAQ dealers were only quoting prices in even eighths; and Griffin and Shams’s recent finding of manipulation in the Bitcoin market. So long as there is accurate disclosure of information, this research can continue to shine a spotlight on financial markets.
In finance, since the stakes are so high, there is always a strong incentive to mislead. Disclosure and monitoring provide the most effective solution, but only if the monitors themselves are trustworthy. As Louis Brandeis wrote in 1914, two years prior to becoming a Supreme Court Justice, “[T]he disclosure must be real.” While we applaud researchers who put their time and energy into the painstaking work necessary to establish misconduct, they should not be the front line of defense. This means encouraging competition among monitors. It also means more official sector oversight and penalties for misleading reports. In this case, in order to head off what could become a costly headache, Morningstar may wish to request that the SEC (and other relevant authorities) audit its data and processes from now on.
Acknowledgements: We thank Huaizhi Chen for providing data and answering our questions, Blake LeBaron for his suggestions, and Larry White for his comments.