“The size and scope of this [$13 billion resolution with JPMorgan] should send a clear signal that the Justice Department’s financial fraud investigations are far from over. No firm, no matter how profitable, is above the law, and the passage of time is no shield from accountability,” [U.S. Attorney General Eric] Holder said. - The New York Times, Nov. 19, 2013


“May you live in interesting times,” goes the apocryphal Chinese curse, and that’s exactly what mortgage lenders and servicers have been doing since the financial crisis hit. Defaults and repurchase demands, fraud and foreclosure scandals, bankruptcies, and anemic housing numbers have all contributed to the malaise.

The U.S. Justice Department is not sweeping past abuses under the rug, and to top it off, the regulatory environment has been overhauled and become more onerous than ever. In this context, it would be easy to view quality control (QC) and compliance as yet another burden making lenders’ lives more difficult.

This is understandable. But while compliance requirements are tightening, it’s worth remembering that the new rules are intended to avert the excesses (and losses) of the past. More importantly, lenders are not helpless in the face of the new challenges. A proactive approach to QC can actually contribute to the bottom line.


How QC improves the bottom line

The primary objectives of a QC department are threefold: to minimize QC costs by sampling efficiently, targeting intelligently and streamlining QC processes; to make QC more effective by prioritizing defects by cost/risk, identifying significant outliers and issues, and providing an efficient feedback loop for corrective actions; and to use QC to reduce losses (minimizing fraud, repurchases, claim denials, etc.), improve investor pricing and reduce regulatory risk.

In other words, the more efficiently a lender can ensure that its product is high quality, the better the profit margins. Over the long term, if a lender spends X dollars more per loan to improve a process and that effort results in a superior product valued at X+Y dollars more per loan, then QC has contributed Y dollars per loan to the bottom line. Servicing portfolio pricing is already differentiated in the secondary market by servicer ratings; could originations be far behind?

This proactive approach to QC takes control of the QC process to improve business profitability. In contrast, the more prevalent approach to QC (at least pre-crisis) has been a reactive one - viewing QC as a required but unwelcome cost of doing business, and doing the minimum necessary to meet the latest requirements.



The high cost of poor quality

How do you turn QC into a profit center? By tying defects to lower profits or higher costs. In a nutshell, identify the factors that affect your cost-per-loan and value-per-loan, prioritize them by weight or potential risk, and base your sampling and audit reviews on risk.

Among the factors that affect your cost-per-loan are rejects, repurchases, claim denials, regulatory fines and penalties, and preventable losses. These factors may have different impacts on value depending on asset disposition (portfolio retention, secondary market pricing, servicing value/pricing).

Once risks are identified, the next step is to assign an average cost to each risk - i.e., the component cost of poor quality. This type of analysis does not have to be precise. (It’s actually very difficult to be precise about calculating the cost of different types of errors, because you have to consider the probability that a defect will result in an incurred cost, as well as the likely cost itself.) The goal is not precision but accuracy. The idea is to put your risks on a relative scale so you can assess their impact and manage risk through discretionary sampling and targeted quality improvement. You can start by weighting potential hazards, or you can use approximate dollar-cost amounts. The end result should be something like the accompanying table.
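To make the relative-scale exercise concrete, here is a minimal sketch of weighting risk factors by expected cost per loan. The risk categories, probabilities and dollar amounts below are entirely hypothetical illustrations, not industry benchmarks:

```python
# Sketch: ranking QC risk factors by expected cost per affected loan.
# Probabilities and dollar amounts are hypothetical, for illustration only.

risk_factors = {
    # name: (probability the defect incurs a cost, average cost if incurred)
    "repurchase demand":  (0.010, 60_000),
    "claim denial":       (0.015, 25_000),
    "regulatory penalty": (0.002, 100_000),
    "investor reject":    (0.030, 2_000),
}

def expected_cost(prob, cost):
    """Expected (probability-weighted) cost per loan for this defect type."""
    return prob * cost

# Rank risks from highest to lowest expected cost.
ranked = sorted(
    ((name, expected_cost(p, c)) for name, (p, c) in risk_factors.items()),
    key=lambda item: item[1],
    reverse=True,
)

for name, cost in ranked:
    print(f"{name:20s} ${cost:,.2f} expected cost per loan")
```

Note that the ranking can differ sharply from a ranking by raw dollar cost: a rare six-figure penalty may matter less, on a relative scale, than a frequent low-cost reject.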

The word “defect” has negative connotations. Some lenders go to unusual lengths to avoid using the term, substituting various euphemisms in its place. But if you are going to use statistical methods to make your QC processes efficient and effective, you must make a binary decision about each loan reviewed: Is it acceptable or defective? This decision allows you to derive a defect rate. A defect rate can be used as a basic metric of quality, but it can also be used to optimize sampling - that is, to determine the minimum number of loans that must be sampled in order to make precise and confident inferences to the population of loans from which the sample is drawn.
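As a sketch of how a gross defect rate feeds the sample-size calculation, the snippet below uses the standard normal-approximation formula for a proportion with a finite-population correction. The 5% expected defect rate, 2% precision and 10,000-loan population are illustrative assumptions; a production QC plan would set these parameters to match its own risk tolerance:

```python
import math

def qc_sample_size(defect_rate, margin, population, z=1.96):
    """Minimum sample size to estimate a population defect rate to within
    +/- margin, at the confidence level implied by z (1.96 ~ 95%), using
    the normal approximation with a finite-population correction."""
    p = defect_rate
    n0 = (z * z * p * (1 - p)) / (margin * margin)  # infinite-population size
    n = n0 / (1 + (n0 - 1) / population)            # finite-population correction
    return math.ceil(n)

# Hypothetical example: 5% expected gross defect rate, +/- 2% precision,
# sampling from a month of 10,000 fundings.
print(qc_sample_size(0.05, 0.02, 10_000))
```

Notice that the expected defect rate drives the result, which is one reason the choice of gross versus net defect rate (discussed below) is not merely cosmetic.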

In this context, it’s worth mentioning Fannie’s new guidelines for QC, released June 30, 2013. These guidelines not only require lenders to track defect rates (good) but also introduce the notion of “gross” defect rate versus “net” defect rate (not so good).

A “net” defect rate is derived by fixing certain errors in the loan file. Whether it’s QC’s job to fix those errors is up for debate. Unfortunately, by making this distinction an official one, Fannie puts the focus back on fixing individual loans (in order to lower the apparent defect rate) rather than on fixing the loan origination (or servicing) process. The danger is that lenders will start using “net” defect rate in their sample size calculations, which is wrong. Why?

Loans that can be fixed after closing still cost the lender substantially more than loans done right the first time. And what about all the similarly defective loans in the population that weren’t sampled? Consider that an error that can be fixed 30 to 60 days after close may not be so “fixable” if the loan goes delinquent 10 months after close and is now a repurchase candidate. This means you cannot reliably extrapolate from a “net” sample defect rate to a “net” population defect rate (or confidence interval).
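The arithmetic behind this objection is easy to illustrate. With hypothetical numbers - a 200-loan sample from 10,000 fundings, 20 gross defects of which 12 are cured post-review - extrapolating the net rate understates population exposure by more than half:

```python
# Hypothetical illustration of why a "net" defect rate doesn't extrapolate.
sample_size   = 200
gross_defects = 20      # defects found in the sampled loans
cured         = 12      # sampled defects fixed after review
population    = 10_000

gross_rate = gross_defects / sample_size            # 10% gross defect rate
net_rate   = (gross_defects - cured) / sample_size  # 4% net defect rate

# Extrapolating each rate to the full population:
est_from_gross = gross_rate * population
est_from_net   = net_rate * population

# The cures applied only to the 200 sampled loans. The other 9,800 loans
# were never reviewed, so roughly 980 of them are still defective - far
# closer to the gross-rate estimate than to the net-rate one.
print(est_from_gross, est_from_net)
```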

Speaking of extrapolation, the U.S. Department of Housing and Urban Development is currently proposing a new rule that would pose a significant risk to lenders, especially the largest originators. The plan is to use statistical sampling to estimate the defect rate of each lender’s overall Federal Housing Administration (FHA) portfolio, extrapolate that defect rate across all of the lender’s originations during the sampling period, and then require the lender to compensate the FHA for the estimated total risk. This may push some lenders to minimize their apparent defect rates by fixing loans and reporting only their net defect rates.

If you agree that QC’s role is not to fix loans but to uncover quality issues, track them to root causes and improve the origination or servicing process, then the defect rate to be reported and used for sample size calculation is the “gross” defect rate.



Choosing the right metrics

Fannie’s new guidelines also say that lenders should track defect rates by severity, such as “moderate defects” versus “significant defects.” This confuses “defects,” which are loan-level ratings, with “errors” or “findings,” which are audit question-level ratings. (One or more errors may result in a defective loan.)

Fannie could take the opportunity to clarify this distinction. Otherwise, lenders will continue to report “error rates” such as “total number of findings” (audit question level) divided by “total number of loans” (loan level). This is neither fish nor fowl; by this measure, there is no telling whether the errors came from a single loan or multiple loans. An alternative measure would be total number of errors divided by total number of opportunities for error (i.e., number of loans multiplied by number of audit questions) - or even better, total defective loans divided by total loans reviewed.
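The difference between these measures is easy to see with hypothetical numbers - say, 100 loans reviewed against 20 audit questions each, yielding 30 findings spread across 12 defective loans:

```python
# Hypothetical illustration of the three measures discussed above.
loans           = 100
questions       = 20    # audit questions applied to each loan
findings        = 30    # total errors across all reviewed loans
defective_loans = 12    # loans with at least one significant error

# The "neither fish nor fowl" measure: findings divided by loans.
# 30% of... what? It can't distinguish 30 errors on one loan from
# 30 errors on 30 loans.
mixed_rate = findings / loans

# Error rate: errors divided by opportunities for error.
error_rate = findings / (loans * questions)

# Defect rate: defective loans divided by loans reviewed.
defect_rate = defective_loans / loans

print(mixed_rate, error_rate, defect_rate)
```

Only the last two are well-defined proportions; the loan-level defect rate is the one that supports sample-size calculations and population inferences.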

Misleading metrics are often combined with poor statistical practices so that inferences to the population are impossible. Because much of your QC and compliance activities will center on the metrics you choose, choose your metrics carefully.

What makes a good metric? What follows are the basics:

The last item could be a significant point of divergence with the government-sponsored enterprises (GSEs), Fannie Mae and Freddie Mac. The GSEs want to buy good-quality loans, but lenders want to originate good-quality loans at a reasonable cost - and the two sides differ in their appetite for risk. As the GSEs and regulators attempt to patch pre-crisis holes in origination and servicing processes, their primary concern is not the economics of lending. We have seen QC requirements become more comprehensive and the regulatory environment tighter: more restrictions, added complexity, more required audits, lower tolerance for error and higher risk of penalties. And with the Consumer Financial Protection Bureau assuming oversight of all originators and servicers, non-banks will almost certainly face more stringent standards than they have been used to.

In response, many lenders now have auditors who check the original QC auditors, and a further layer of checkers to check the checkers. All of this costs money, and no one is offering to help defray the costs. So it’s up to each lender to develop a well-thought-out QC and compliance plan that fits its book of business and can be done efficiently and effectively.


Efficient quality control

Once you have established your key metrics, including the definition of a “defect,” the next step is to find out where the defects are occurring in your origination and servicing processes. This is where an intelligent, replicable statistical methodology can have a big impact. The objective is to achieve the greatest insight into your processes (depth and breadth) with the least effort and cost. While a comprehensive statistical methodology is beyond the scope of this article, there are some basic principles worth mentioning:

The House Financial Services Committee recently estimated that the U.S. financial sector is already spending more than 24 million man-hours per year complying with Dodd-Frank - apparently 4 million man-hours more than it took to build the entire Panama Canal.

But odds are that most of those hours are not efficiently spent, which means there is opportunity. Lenders and servicers can focus their QC and compliance efforts, streamline their operations, and make real contributions to their bottom lines.


Kaan Etem is senior vice president for Cogent QC Systems. He can be reached at kaan.etem@cogentqc.com.

Quality Control

Quality Control And The Bottom Line

By Kaan Etem

The big challenge for lenders today is making quality control programs cost-efficient.









