Taking A Second Look At Today’s Technology Risks

Written by Jordan Brown
on May 16, 2012
Categories: Required Reading

REQUIRED READING: Every organization takes risks, but sometimes there are hidden cracks below the foundation of a company that pose serious problems. During the recent financial crisis, it became apparent that there were more than a few firms with well-intentioned assumption models and applications that simply fed and justified poor decisions.

As part of the industry's healing process, lenders must not only change their lending practices, but also take a new look at their technology infrastructure and the key assumptions within the applications themselves.

The tough part of running and building the right technology portfolio is to balance the business requirements of the end users with the right mix of scalability, transparency and support infrastructure. When it works, it brings great dividends and becomes a unique business process. The right selection, training, implementation and process adoption yield significant returns.

Unfortunately, many financial institutions do not have the internal staff, expertise or time to think through the many issues associated with application and model risk. The risk of failure is not limited to inadequate implementation or cost overrun. Poor choices translate into higher costs, inefficient controls and limited operational scalability. A far more dangerous organizational fault, however, is weak analytical models feeding assumptions into application technology.

This is something that lenders, regulators and auditors should take a hard look at when reviewing operational risk as well as the potential systemic risk to the overall health of the industry. To address these concerns, let's dig into the details and lay out the true technology risk and how to build an internal evaluation framework to manage this problem.

An assessment framework is a helpful tool in evaluating the business risks, as well as in identifying and tackling problem areas. There are eight key areas that the assessment needs to consider.

Organizational review and model definition. It is crucial to identify all software applications and feeder assumption models. Building robust disaster recovery, documentation, test plans and policies for all assumption models and software applications is an important step to really dig into the variables behind the models.

Key variables-assumption sets. Many vendor-provided assumption sets require documentation, training and independent validation. Special care should be taken to identify any self-created spreadsheet models that feed applications. Some typical problem areas are the product pricing calculations, whole loan/bulk commitment pricing models, loan fallout assumptions, hedge ratios, delivery timing estimates, interest-rate path definition, default probability, default severity, prepayment propensity and current market-adjusted loan-to-value calculations.
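To make one of these assumption sets concrete, the current market-adjusted loan-to-value calculation can be sketched as follows. This is an illustrative example, not any particular vendor's formula, and all function names and figures are hypothetical:

```python
def market_adjusted_ltv(unpaid_balance, original_value, hpi_change):
    """Current market-adjusted LTV: the outstanding loan balance divided by
    the original property value adjusted for the home-price-index change
    since origination. (Hypothetical sketch for illustration.)"""
    current_value = original_value * (1 + hpi_change)
    return unpaid_balance / current_value

# Example: a $190,000 balance on a home appraised at $250,000 at
# origination, with local prices down 12% since then.
ltv = market_adjusted_ltv(190_000, 250_000, -0.12)
print(round(ltv, 3))  # 0.864
```

Even a calculation this simple hides assumptions worth validating independently, such as which home-price index is used and how stale the index data is.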

Operational process flow diagram. An operational flow diagram is an effective tool to trace the business process flow. It provides a simple way to trace dependencies, identify choke points and discover opportunities to improve workflow.

Application flow chart and business flow overlay. Together with the process flow diagram, an application flow chart identifies all of the relevant applications that touch a mortgage loan. The key here is to include all integrated parties and assumption models that feed the application layer. By understanding the moving parts, it is possible to assess the potential risk, as well as develop management strategies and controls.

Testing and upgrade process. The testing framework, business process and upgrade path for each application need to be carefully considered. Each new upgrade poses a risk to the status quo. Most information technology organizations are old pros at this process and handle it well. The issue really becomes relevant when there are integration partners, rogue applications and dependent spreadsheets that feed into well-managed systems.

Best-practices technology evaluations. Most organizations use a fraction of the potential functionality included by vendors. Vendors spend years building applications, and it makes sense to take advantage of their expertise in reviewing the use of their system. A best-practices technology evaluation can often uncover problem areas and help streamline workflow. This is a high-payback, effective use of resources if completed by the system expert with a solid mortgage background.

Performance benchmarks. All technology investments should be accompanied by performance benchmarks that include financial and operational metrics. A best practice is to establish the performance benchmarks up front. This is reinforced quite effectively when the expected return on investment from technology is included in department budgets. Operational performance should include system availability and usability studies.
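The financial side of those benchmarks often comes down to two numbers: the payback period and the return on investment. A minimal sketch of both, assuming a simple undiscounted model and entirely hypothetical figures:

```python
def payback_years(upfront_cost, annual_savings):
    """Simple payback period: years until cumulative savings cover the cost."""
    return upfront_cost / annual_savings

def simple_roi(upfront_cost, annual_savings, horizon_years):
    """Return on investment over a horizon, ignoring discounting."""
    return (annual_savings * horizon_years - upfront_cost) / upfront_cost

# Hypothetical: a $300,000 implementation that saves $120,000 per year.
print(payback_years(300_000, 120_000))   # 2.5 years
print(simple_roi(300_000, 120_000, 5))   # 1.0, i.e., 100% over five years
```

Establishing these targets up front, and then measuring against them in department budgets, is what turns a technology purchase into an accountable investment.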

Audit guidance and systemic risk. There is ample room for innovation and improvement across the mortgage technology sector. From a legacy vendor to a start-up software firm, the rules of the game should be the same, and a common framework should be established to manage, build, document and test applications.

There is significant risk inherent in applications that are simply grandfathered in because of their legacy status. Many lenders are aware of application shortfalls and put into place a patchwork quilt of work-around processes and niche applications to fill the gaps. Each of these areas should be carefully reviewed and documented, and responsible teams should be identified to ensure proper assumptions and controls are applied.

New vendor selection

There are hundreds of vendors across the mortgage space. The selection of a new vendor should be guided by a consistent process. Practical business requirements need to be balanced with operational and technical benchmarks to create a best-in-class approach. Unfortunately, firms seldom place enough emphasis on the character of the vendor organization. It is absolutely critical to understand the principals, financial stability, business plans, enhancement strategy and customer support personnel.

Financial institutions must take the time and exercise the diligence to evaluate the financial stability and trajectory of the vendor that they are selecting to support their operations. It is a mistake to simply default to the largest vendors, because small technology firms are often the most innovative providers.

The viability of a vendor organization is tested when the company is hit with changes in personnel or challenged with new technologies that require a platform upgrade. Excellence is achieved by demonstrating the ability to adapt to new business requirements, embrace technical advances and partner to build a competitive advantage. Simply selecting a vendor because it has the largest market share or balance sheet may be a mistake.

A best-practice approach is to develop a very clear process in which all vendors are assessed on the same basis. It should include the return-on-investment analysis, expected payback time frame, internal and external implementation costs, controls, contingency planning, and dependencies.
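One way to put "assessed on the same basis" into practice is a weighted scorecard over those criteria. The weights, criteria names and scores below are purely illustrative assumptions, not a prescribed methodology:

```python
# Hypothetical weighted scorecard: each vendor is scored 1-5 per criterion,
# and the weights reflect the institution's own priorities.
WEIGHTS = {
    "roi": 0.30,
    "implementation_cost": 0.20,
    "controls": 0.20,
    "contingency_planning": 0.15,
    "dependencies": 0.15,
}

def weighted_score(scores):
    """Combine per-criterion scores into a single comparable number."""
    return sum(WEIGHTS[c] * scores[c] for c in WEIGHTS)

vendor_a = {"roi": 4, "implementation_cost": 3, "controls": 5,
            "contingency_planning": 4, "dependencies": 3}
print(round(weighted_score(vendor_a), 2))  # 3.85
```

The value of the scorecard is less in the final number than in forcing every vendor through identical questions before any number is produced.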

Vendor management is an ongoing risk management process. Performance benchmarks establish operating metrics to evaluate effectiveness. It is important to monitor both the financial and operational viability of existing vendors. Existing vendors need to deliver on an ongoing basis and keep their technology consistent with both the business and technical requirements. The hard work for a vendor starts after the sale.

Model risk

The model risk spans the gamut from secondary marketing to product and pricing engines, default management, servicing evaluation and prepayment models. In all of these areas, basic calculations are fairly straightforward and well documented by most vendors. It is important, however, for end users to understand the math, assumptions and all implications.

If a mortgage banker does not understand how a number is derived and its resulting effects, he or she needs to act quickly before deployment. Always keep in mind that software designers are human, and mistakes exist in virtually all software applications. It is important to have a strong understanding of assumption models.

Independent external validation is a sound approach because it brings a fresh perspective. Another complementary approach is to deploy multiple analytical models. This is beneficial when a firm understands the differences among the models and can calibrate the results.
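The multiple-model approach can be sketched as a simple cross-check: run two independent models over the same scenarios and flag material disagreements for review. The two toy prepayment-speed models below are hypothetical stand-ins, not real vendor or internal models:

```python
def compare_models(model_a, model_b, scenarios, tolerance):
    """Return the scenarios where two models disagree beyond a tolerance,
    along with each model's output, so analysts can investigate."""
    flagged = []
    for scenario in scenarios:
        a, b = model_a(scenario), model_b(scenario)
        if abs(a - b) > tolerance:
            flagged.append((scenario, a, b))
    return flagged

# Toy prepayment-speed models keyed on rate incentive (hypothetical).
vendor_model = lambda incentive: 0.06 + 0.20 * max(incentive, 0.0)
internal_model = lambda incentive: 0.07 + 0.35 * max(incentive, 0.0)

disagreements = compare_models(vendor_model, internal_model,
                               scenarios=[0.0, 0.01, 0.02, 0.03],
                               tolerance=0.012)
```

The flagged scenarios are where calibration work pays off: understanding why the models diverge matters more than picking a winner.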

The bottom line is that each analytical model deployed should be subject to scrutiny, documentation and validation. Often, the key is to look beyond the most obvious applications – secondary risk system, servicing valuation model and default models – and keep careful tabs on both the underlying assumption sets and spreadsheets that feed commercial applications.

As a general rule, documentation, testing, cross-training and independent external review are the best approaches to tackling model risk.

As a practicality, lenders need to build or acquire the best applications and assessment tools to make decisions. There needs to be a balance struck between the IT group and business unit that provides the infrastructure to document, test, deploy and manage applications.

Technology greatly assists the mortgage process. The smartest organizations embrace the vendor community, and vendors, in turn, build solid working relationships that drive innovation. New ideas, innovation and streamlined process flows decrease costs. A careful eye on the application flow and accompanying business processes is essential in order to minimize technology risk.

Jordan Brown is managing principal of MarketWise Advisors LLC, headquartered in Ponte Vedra Beach, Fla. He can be reached at (800) 815-9484.
