When Comparing Apples to Apples, Whom Should You Trust?
Hard markets have a way of making insurance company financial strength ratings a hot topic of discussion, and not just among insurance agents and brokers. Professional risk managers and other insurance buyers, consumer advocates, regulators and the media all get into the act. Insurance agents and brokers often take some heat if they have placed coverage with carriers that later run into financial difficulty. Agents and brokers, the public reasons, are professionals, and professionals ought to be able to see trouble coming down the track long before it arrives.
The principal tool that insurance professionals use to forecast insurer insolvency is a financial strength rating. Not so long ago, there was only one place to turn for financial strength ratings on insurance companies. A.M. Best Company Inc. held a monopoly in the United States and there was no comparable service available for the rest of the world.
For better or worse?
Today things have improved. Or have they? Agents and brokers now have a choice of five sources for insurer financial strength ratings when evaluating large national and/or global insurers. For ratings of more regionally focused property and casualty insurers, agents may turn to companies such as Ohio-based Demotech, which specializes in financial analysis and financial strength ratings for smaller niche carriers. That sounds as though it ought to be an improvement, but the proliferation of financial strength ratings can also be something of a two-edged sword.
The availability of ratings from multiple sources ought to exert upward pressure on the quality of the ratings and their ability to predict which insurers will encounter financial difficulties in the future. That is how competition usually works. Ratings from multiple sources also make second opinions available. Agents and brokers can feel more secure placing coverage with insurers who hold secure ratings from two (or more) rating agencies.
The problems that this cornucopia of ratings presents begin with the number of rating agencies. Instead of going to one place that everybody agrees is the authoritative source, agents and brokers now have to seek out multiple ratings. Finding out what each of the rating agencies has to say about a particular company would mean checking separate sources. That means investing more time and effort with no corresponding increase in either the perceived value of the services or revenue.
Fortunately, or unfortunately (depending on your point of view), not all insurance companies hold ratings of any kind from all rating agencies. That allows agents and brokers to limit their search by disregarding those rating agencies that rate the fewest insurers, but it also creates the problem of evaluating an insurer that is unrated by one or more agencies. Is this an indication that the company is on shaky financial ground or has something to hide that taking part in the rating process might disclose?
Ratings diversity is the other problem. How should agents and brokers respond when different agencies assign different ratings to the same insurer? Producers need to decide how they will react if the ratings assigned to a single company vary significantly from one rating agency to another.
Despite the fact that some people will tell you that A.M. Best has been rating insurance companies for over 100 years, the rating process actually began in 1906. That was the year of the great San Francisco earthquake and fire, and the resulting failure of large numbers of insurers prompted customers to suggest to Alfred M. Best that he undertake the project of rating insurance companies on their ability to weather the storm. Life insurance company ratings followed in 1928, once again in response to events that raised concern about the ability of insurers to continue their operations.
Debt ratings followed close on the heels of what have come down to us as insurer financial strength ratings. John Moody started rating railroad bonds in 1909, followed by Poor’s Publishing Company in 1916, Standard Statistics Company in 1922 and Fitch Publishing Company in 1924. Standard Statistics merged with Poor’s Publishing in 1941 to produce Standard & Poor’s.
For more than 75 years debt ratings and insurer financial strength ratings remained two distinct worlds. Insurance companies did not as a rule issue publicly traded debt, so they had no need for debt ratings and debt rating agencies tended to ignore them. Worldwide at the end of 2002 about 600 life insurers and 1,300 property/casualty insurers held debt ratings. Although the raw numbers might sound impressive, less than 25 percent of insurance companies throughout the world have debt ratings.
Things started to change when debt rating agencies began to issue insurer financial strength ratings and a new player entered the scene. The shift began during the 1970s, when Moody’s and S&P started publishing debt ratings for insurers; Duff & Phelps (later absorbed by Fitch) began rating insurer debt in the 1980s. It was a small step from rating debt to developing financial strength ratings, and the debt-rating agencies took it in the 1980s. In 1989, Weiss Ratings joined the fray with no prior experience in the field. Also in 1989, Demotech developed its analysis model to provide financial strength ratings for smaller niche property/casualty insurers.
A.M. Best remains the dominant player in the financial strength ratings game. This March a Swiss Re Sigma report estimated Best’s share of the worldwide market at 44 percent, compared to 30 percent for S&P, 17 percent for Moody’s and 7 percent for Fitch. Weiss and Demotech fall within the remaining 2 percent of the market shared by all other rating agencies. A.M. Best is especially dominant in the United States, while S&P provides greater coverage of European insurers.
The relative positions of the players are not surprising when you consider the scope of their activities. At the end of 2002 A.M. Best rated 2,246 domestic property/casualty insurers, representing the closest thing to the entire marketplace you can find. Only Weiss Ratings provides similar numbers, and as we shall see its rating methods differ materially from those of its competitors. S&P rates insurers that account for about 89 percent of premium volume, while Moody’s and Fitch rate a much smaller piece of the pie. As for niche carrier ratings, Demotech reviews approximately 2,000 P/C insurers each year and finalizes its financial stability rating on 10 percent of those companies.
Weiss Ratings approaches the process of evaluating insurance companies very differently from its older counterparts. A.M. Best, S&P, Moody’s and Fitch all work with insurers in an interactive rating process. Weiss develops its ratings from publicly available information with minimal input from and no participation by the rated insurers.
The interactive rating process allows the rated insurer to have an influence on the rating, but the net effect of that influence is the subject of some debate. A.M. Best, S&P, Moody’s, Fitch and the insurers they rate interactively maintain that the interactive process permits the rated companies to provide valuable information that ultimately produces more accurate ratings. Insurers also pay a rating fee for interactive ratings, fees that have become an important source of income for the rating agencies. Demotech maintains its rating process is predominantly quantitative, although insurers are encouraged to provide supplementary information from management and other sources, and they also pay rating fees. Something of a renegade in the field, Weiss does not engage in the same type of interactive rating process and does not derive any income from the companies it rates (except to the extent that they purchase final ratings, their own or those of their competitors).
This leads to another important difference users will encounter when looking for ratings. A.M. Best, S&P, Moody’s and Fitch ratings are all available for the asking. Ratings for smaller niche carriers are also available from Demotech. Deriving income from rating fees allows those agencies to give away the final product. The sale of ratings to users, on the other hand, is Weiss’ only source of income. You can still find a Weiss rating on the Internet, but it will cost you $14.95 unless you’ve purchased the unlimited rating service.
Because the company derives all its income from the sale of ratings to end users, Weiss Ratings maintains that it alone provides independent and unbiased ratings. “Really the value is in our independence, the fact that we actually work for the customer rather than the insurance companies that we rate,” explained Stephanie Eakins, a financial analyst at Weiss Ratings Inc. “We don’t have relationships with the companies that we rate, so there’s no bias in those ratings. They’re really strictly based on the financial performance of the companies.”
Although Eakins intimates that collecting rating fees from rated insurers necessarily biases the end product, other rating agencies take a different view. Debt rating agencies have charged rating fees for 25 or 30 years, and there have been a number of financial scandals over that time. Monster financial collapses such as Enron and WorldCom have not produced a swelling public outcry to reform rating agency operations. Nor have rating agencies found themselves the target of litigation as the accounting profession has. If collecting rating fees did indeed bias debt ratings, plaintiffs’ attorneys have not turned up any evidence of it.
S&P takes pride in its high standards and expertise. “Users of our ratings tell us that they value our standard-setting, comprehensiveness, depth of analytic expertise, and transparency,” offered Steven Dreyer, managing director at S&P. “We believe that we are unique among rating agencies in publishing detailed articles of our ratings process for all to see. We have found that the open platform provides clarity for both rating users and insurers being rated, and helps elevate the nature of debate to more substantive issues by providing a window into our thinking about critical credit issues.”
Track record is A.M. Best’s yardstick for gauging ratings. “There is only one way to judge the value of a rating and that is to look at the long-term track record of the rating opinions,” commented Group Vice President Matthew Mosher. “Much can be made of high profile company failures, but when they are viewed in the context of the thousands of ratings published by A.M. Best, our track record remains strong.”
Telling the raters apart
From the way Weiss Ratings talks and from published reports, you might expect significant differences in the results the different raters produce. When you get down to brass tacks, however, differences in their ratings are more semantic than substantive. Based on year-end 2002 data A.M. Best assigned secure ratings to about 90 percent of the companies it rated. S&P was right behind at 87 percent and Weiss brought up the rear with about 75 percent of insurers receiving a secure rating. The differences are there to be sure, but they are not nearly as dramatic as some might believe.
The most noticeable distinction among the rating agency opinions lies in the ratings they assign and how they describe them. A.M. Best uses a scale from A++ to F, with its vulnerable ratings starting at B. Vulnerable ratings from S&P and Fitch begin at BB. Moody’s highest vulnerable rating is similar, Ba. Weiss, by contrast, assigns ratings of D, E and F to insurers that it considers vulnerable. The descriptive adjectives that accompany the ratings can be even more different, and downright confusing if you separate them from the letter grades. “Good” represents the middle of the secure range to Weiss, but the lowest secure rating to S&P and Moody’s. The different letter grades and descriptions rating agencies assign to essentially similar ratings are a source of confusion that end users will just have to live with.
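For readers who keep their own carrier lists, one way to tame that confusion is to reduce each scale to a simple secure-or-vulnerable flag. The sketch below is purely illustrative: the secure/vulnerable boundaries follow the paragraph above, but the abbreviated rating lists and the is_secure helper are assumptions made for demonstration, not any rating agency's published scale or methodology.

```python
# Illustrative secure ranges (highest to lowest) for each rater, reduced to
# base letter grades. Real scales include notches and modifiers (A-, Baa2)
# that are omitted here for brevity.
SECURE_RATINGS = {
    "A.M. Best": ["A++", "A+", "A", "A-", "B++", "B+"],  # vulnerable starts at B
    "S&P": ["AAA", "AA", "A", "BBB"],                    # vulnerable starts at BB
    "Fitch": ["AAA", "AA", "A", "BBB"],                  # vulnerable starts at BB
    "Moody's": ["Aaa", "Aa", "A", "Baa"],                # vulnerable starts at Ba
    "Weiss": ["A", "B", "C"],                            # vulnerable starts at D
}

def is_secure(agency: str, rating: str) -> bool:
    """Return True if the rating falls in the agency's (illustrative) secure range."""
    return rating in SECURE_RATINGS.get(agency, [])

# A "B"-family grade is still secure on Best's scale, while the similar-looking
# BB already falls in S&P's vulnerable range.
print(is_secure("A.M. Best", "B+"))  # True
print(is_secure("S&P", "BB"))        # False
```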
Both A.M. Best and S&P support the quality of their ratings with studies that calculate insolvency rates for insurers by rating level. S&P’s Dreyer pointed out that of 20 insolvencies among property/casualty insurers during 2003, 14 were unrated and the remaining six carried vulnerable ratings. Backing up another two years reveals that at the start of 2001 S&P rated nine of the failed insurers, six in the vulnerable range and three at BBB, the lowest secure rating.
At press time A.M. Best had another insolvency report in the works, and in March of this year published a slightly different study that looked at impairments of insurance companies. Because the definition of impairment is broader than insolvency or default, the study methodology may be better attuned to the needs of ratings users because it measures events that disrupt an insurer’s operations but do not necessarily result in failure to make timely payment on all financial obligations. The results indicate that 0.06 percent of insurers in the highest rating group (A++ and A+) became impaired within one year. The impairment rate for this group does not reach 1 percent until six years after the rating assignment, and is only 4.65 percent 15 years later. Comparable numbers for companies rated D are 7.2 percent at the end of one year, and 50.94 percent 15 years down the road.
Another interesting finding in the report is that the probability of a rating change varies inversely with the rating. In other words, the lower the rating the more likely it is to change. That little tidbit just might be the best piece of information ratings users can derive from the report. Insurers rated at the bottom of the secure range are more likely to fall into the vulnerable range, and insurers in the vulnerable range are more likely to become impaired or insolvent. That reads like a hint with a sledgehammer that your own security requirements will work better if you set them above the lowest secure rating.
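For those inclined to act on that hint, a simple screen might demand a cushion above the lowest secure grade before a carrier makes the approved list. The sketch below is a hypothetical illustration built on the abbreviated scales shown earlier; the one-grade cushion and the meets_requirement helper are assumptions of this example, not a rule drawn from the A.M. Best report.

```python
# Hypothetical screen: require a rating at least `cushion` grades above the
# lowest secure grade on an agency's (illustrative, abbreviated) scale.
SECURE_RATINGS = {
    "A.M. Best": ["A++", "A+", "A", "A-", "B++", "B+"],  # highest to lowest
    "S&P": ["AAA", "AA", "A", "BBB"],
}

def meets_requirement(agency: str, rating: str, cushion: int = 1) -> bool:
    """True if the rating sits at least `cushion` grades above the lowest secure grade."""
    scale = SECURE_RATINGS[agency]
    if rating not in scale:
        return False  # vulnerable or unrated carriers fail the screen outright
    # Index 0 is the highest grade; the last index is the lowest secure grade.
    return (len(scale) - 1) - scale.index(rating) >= cushion

# BBB is secure on S&P's scale but offers no cushion above the bottom of the range.
print(meets_requirement("S&P", "BBB"))  # False
print(meets_requirement("S&P", "A"))    # True
```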
Joseph F. Mangan is an author, editor and consultant with more than 25 years of experience in property/casualty underwriting.