From Risk to Strategy: Embracing the Technology Shift

The role of the risk manager has always been to understand and manage threats to a given business. In theory, this involves a very broad mandate to capture all possible risks, both current and future. In practice, however, some risk managers are assigned to narrower, siloed roles, with tasks that can seem somewhat disconnected from key business objectives.

Amidst a changing risk landscape and increasing availability of technological tools that enable risk managers to do more, there is both a need and an opportunity to move toward that broader risk manager role. This need for change – not only in the risk manager’s role, but also in the broader approach to organizational risk management and technological change – is driven by five factors.

Marsh Ex 1

The rapid pace of change has many C-suite members questioning what will happen to their business models. Research shows that 73 percent of executives predict significant industry disruption in the next three years (up from 26 percent in 2018). In this challenging environment, risk managers have a great opportunity to demonstrate their relevance.

USING NEW TOOLS TO MANAGE RISKS

Emerging technologies present compelling opportunities for the field of risk management. As discussed in our 2017 report, the three levers of data, analytics, and processes give risk professionals a framework for considering technology initiatives and their potential gains. Emerging tools can support risk managers in delivering a more dynamic, in-depth view of risks, in addition to potential cost savings.

However, this year’s survey shows that across Asia-Pacific, risk managers still feel they are severely lacking knowledge of emerging technologies across the business. Confidence scores were low in all but one category, risk management information systems (RMIS). These scores were only marginally higher for respondents in highly regulated industries (financial services and energy utilities), underscoring the need for further training across all industries.

Marsh Ex 3

When it comes to technology, risk managers should aim for “digital fluency” – a level of familiarity that allows them to

  • first determine how technologies can help address different risk areas,
  • and then understand the implications of doing so.

They need not understand the inner workings of various technologies, as their niche should remain aligned with their core expertise: applying risk technical skills, principles, and practices.

CULTIVATING A “DIGITAL-FIRST” MIND-SET

Successful technology adoption does not only present a technical skills challenge. If risk function digitalization is to be effective, risk managers must champion a cultural shift to a “digital-first” mindset across the organization, where all stakeholders develop a habit of thinking about how technology can be used for organizational benefit.

For example, the risk manager of the future will be looking to glean greater insights using increasingly advanced analytics capabilities. To do this, they will need to actively encourage their organization

  • to collect more data,
  • to use their data more effectively,
  • and to conduct more accurate and comprehensive analyses.

Underlying the risk manager’s digital-first mindset will be three supporting mentalities:

1. The first of these is the perception of technology as an opportunity rather than a threat. Some understandable anxiety exists on this topic, since technology vendors often portray technology as a means of eliminating human input and labor. This framing neglects the gains in effectiveness and efficiency that allow risk managers to improve their judgment and decision making, and spend their time on more value-adding activities. In addition, the success of digital risk transformations will depend on the risk professionals who understand the tasks being digitalized; these professionals will need to be brought into the design and implementation process right from the start. After all, as the Japanese saying goes, “it is workers who give wisdom to the machines.” Fortunately, 87 percent of surveyed PARIMA members indicated that automating parts of the risk manager’s job to allow greater efficiency represents an opportunity for the risk function. Furthermore, 63 percent of respondents indicated that this was not merely a small opportunity, but a significant one (Exhibit 6). This positive outlook makes an even stronger statement than findings from an earlier global study in which 72 percent of employees said they see technology as a benefit to their work.

2. The second supporting mentality will be a habit of looking for ways in which technology can be used for benefit across the organization, not just within the risk function but also in business processes and client solutions. Concretely, the risk manager can embody this culture by adopting a data-driven approach, whereby they consider:

  • How existing organizational data sources can be better leveraged for risk management
  • How new data sources – both internal and external – can be explored
  • How data accuracy and completeness can be improved

“Risk managers can also benefit from considering outside-the-box use cases, as well as keeping up with the technologies used by competitors,” adds Keith Xia, Chief Risk Officer of OneHealth Healthcare in China.

This is an illustrative rather than comprehensive list, as a data-driven approach – and more broadly, a digital mind-set – is fundamentally about a new way of thinking. If risk managers can grow accustomed to reflecting on technologies’ potential applications, they will be able to pre-emptively spot opportunities, as well as identify and resolve issues such as data gaps.

3. All of this will be complemented by a third mentality: the willingness to accept change, experiment, and learn, such as in testing new data collection and analysis methods. Propelled by cultural transformation and shifting mind-sets, risk managers will need to learn to feel comfortable with – and ultimately be in the driver’s seat for – the trial, error, and adjustment that accompanies digitalization.

MANAGING THE NEW RISKS FROM EMERGING TECHNOLOGIES

The same technological developments and tools that are enabling organizations to transform and advance are also introducing their own set of potential threats.

Our survey shows the PARIMA community is aware of this dynamic, with 96 percent of surveyed members expecting that emerging technologies will introduce some – if not substantial – new risks in the next five years.

The following exhibit gives a further breakdown of views from this 96 percent of respondents, and the perceived sufficiency of their existing frameworks. These risks are evolving in an environment where there are already questions about the relevance and sufficiency of risk identification frameworks. Risk management has become more challenging due to the added complexity of rapid shifts in technology, and individual teams are using risk taxonomies with inconsistent methodologies, further highlighting the challenges risk managers face in managing their responses to new risk types.

Marsh Ex 9

To assess how new technology in any part of the organization might introduce new risks, consider the following checklist:

HIGH-LEVEL RISK CHECKLIST FOR EMERGING TECHNOLOGY

  1. Does the use of this technology cut across existing risk types (for example, AI risk presents a composite of technology risk, cyber risk, information security risk, and so on depending on the use case and application)? If so, has my organization designated this risk as a new, distinct category of risk with a clear definition and risk appetite?
  2. Is use of this technology aligned to my company’s strategic ambitions and risk appetite? Are the cost and ease of implementation feasible given my company’s circumstances?
  3. Can this technology’s implications be sufficiently explained and understood within my company (e.g. what systems would rely on it)? Would our use of this technology make sense to a customer?
  4. Is there a clear view of how this technology will be supported and maintained internally, for example, with a digitally fluent workforce and designated second line owner for risks introduced by this technology (e.g. additional cyber risk)?
  5. Has my company considered the business continuity risks associated with this technology malfunctioning?
  6. Am I confident that there are minimal data quality or management risks? Do I have the high quality, large-scale data necessary for advanced analytics? Would customers perceive use of their data as reasonable, and will this data remain private, complete, and safe from cyberattacks?
  7. Am I aware of any potential knock-on effects or reputational risks – for example, through exposure to third (and fourth) parties that may not act in adherence to my values, or through invasive uses of private customer information?
  8. Does my organization understand all implications for accounting, tax, and any other financial reporting obligations?
  9. Are there any additional compliance or regulatory implications of using this technology? Do I need to engage with regulators or seek expert advice?
  10. For financial services companies: Could I explain any algorithms in use to a customer, and would they perceive them to be fair? Am I confident that this technology will not violate sanctions or support crime (for example, fraud, money laundering, terrorism finance)?
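A checklist like this is easiest to act on when responses are tracked rather than discussed once. As a minimal illustration (the structure and all names below are hypothetical, not from the report), each question can be recorded with its answer and follow-up notes, and anything not affirmatively answered stays flagged:

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class ChecklistItem:
    """One question from the emerging-technology risk checklist."""
    question: str
    answer: Optional[bool] = None  # None = not yet assessed
    notes: str = ""

def open_items(items):
    """Questions still unanswered, or answered 'no' - these need follow-up."""
    return [i for i in items if i.answer is not True]

checklist = [
    ChecklistItem("Does this technology cut across existing risk types?"),
    ChecklistItem("Is its use aligned to strategy and risk appetite?", answer=True),
    ChecklistItem("Have business continuity risks of a malfunction been considered?",
                  answer=False, notes="Failover plan still in draft"),
]

for item in open_items(checklist):
    print("OPEN:", item.question)
```

Keeping unresolved questions visible until a designated owner closes them out is the point of the exercise; the data structure itself can be as simple as this.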

SECURING A MORE TECHNOLOGY-CONVERSANT RISK WORKFORCE

As risk managers focus on digitalizing their function, it is important that organizations support this with an equally deliberate approach to their people strategy. This is for two reasons, as Kate Bravery, Global Solutions Leader, Career at Mercer, explains: “First, each technological leap requires an equivalent revolution in talent; and second, talent typically becomes more important following disruption.”

While upskilling the current workforce is a positive step, as addressed before, organizations must also consider a more holistic talent management approach. Risk managers understand this imperative, with survey respondents indicating a strong desire to increase technology expertise in their function within the next five years.

Yet, little progress has been made in adding these skills to the risk function, with a significant gap persisting between aspirations and the reality on the ground. In both the 2017 and 2019 surveys, the number of risk managers hoping to recruit technology experts has been at least 4.5 times the number of teams currently possessing those skills.

Marsh Ex 15

EMBEDDING RISK CULTURE THROUGHOUT THE ORGANIZATION

Our survey found that a lack of risk management thinking in other parts of the organization is the biggest barrier the risk function faces in working with other business units. This is a crucial and somewhat alarming finding – but new technologies may be able to help.

Marsh Ex 19

As technology allows for increasingly accurate, relevant, and holistic risk measures, organizations should find it easier to develop risk-based KPIs and incentives that can help employees throughout the business incorporate a risk-aware approach into their daily activities.

From an organizational perspective, a first step would be to describe risk limits and risk tolerance in a language that all stakeholders can relate to, such as potential losses. Organizations can then cascade these firm-wide risk concepts down to operational business units, translating risk language into tangible and relevant incentives that encourage behavior consistent with firm values. Research shows that employees in Asia want this linkage, citing a desire to better align their individual goals with business goals.

The question thus becomes how risk processes can be made an easy, intuitive part of employee routines. It is also important to consider KPIs for the risk team itself as a way of encouraging desirable behavior and further embedding a risk-aware culture. Already a majority of surveyed PARIMA members use some form of KPIs in their teams (81 percent), and the fact that reporting performance is the most popular service level measure supports the expectation that PARIMA members actively keep their organization informed.

Marsh Ex 21

At the same time, these survey responses also raise a number of questions. Forty percent of organizations indicate that they measure reporting performance, but far fewer are measuring accuracy (15 percent) or timeliness (16 percent) of risk analytics – which are necessary to achieve improved reporting performance. Moreover, the most-utilized KPIs in this year’s survey tended to be tangible measures around cost, from which it can be difficult to distinguish a mature risk function from a lucky one.

SUPPORTING TRANSFORMATIONAL CHANGE PROGRAMS

Even with a desire from individual risk managers to digitalize and complementary organizational intentions, barriers still exist that can leave risk managers using basic tools. In 2017, cost and budgeting concerns were the single standout barrier to risk function digitalization, chosen by 67 percent of respondents, well clear of second-placed human capital concerns at 18 percent. This year’s survey responses were much closer, with a host of ongoing barriers, six of which were cited by more than 40 percent of respondents.

Marsh Ex 22

Implementing the nuts and bolts of digitalization will require a holistic transformation program to address all these barriers. That is not to say that initiatives must necessarily be massive in scale. In fact, well-designed initiatives targeting specific business problems can be a great way to demonstrate success that can then be replicated elsewhere to boost innovation.

Transformational change is inherently difficult, particularly where it spans both technological and people dimensions. Many large organizations have relied solely on IT teams for their “digital transformation” initiatives. This approach has had limited success, as such teams are usually designed to deliver very specific business functionalities, as opposed to leading change initiatives. If risk managers are to realize the benefits of such transformation, it is incumbent on them to take a more active role in influencing and leading transformation programs.

Click here to access Marsh’s and PARIMA’s detailed report

Internal Audit’s Guide to Planning, Managing and Addressing Risks

As time passes and the modern-day enterprise evolves, so does the role of the internal auditor. What was once perceived as a function of rule enforcers and compliance police is expanding into one of a trusted advisor within the business. The last several years have introduced an enormous amount of change, but the proliferation of technology within the enterprise is accelerating every aspect, from operations to decision making.

The progressive steps organizations are taking as a result of the digital age present a bevy of benefits, but in turn, create a slew of challenges and risks. Subsequently, the internal audit function has been forced to adapt along the way, assuring key stakeholders in the business that risks have been identified, but above all, addressed and mitigated.

While identifying and managing risks tied to the business falls on management, it’s internal audit’s responsibility to focus on closing the loop. That’s why our second article focuses on effective audit follow-up, in addition to outlining the how and when of escalating risks.

A DYNAMIC AND ITERATIVE PROCESS

The COSO Internal Control – Integrated Framework (2013) provides that a “risk assessment involves a dynamic and iterative process for identifying and assessing risks to the achievement of objectives.” (emphasis added). To be effective, internal audit should be aware of and responsive to changes in known risks and additionally the emergence of new ones.

A purpose of the traditional (i.e., annual) risk assessment is to allow internal audit to develop a planning horizon that is understood by stakeholders – in particular, executive management and the audit committee – as a basis for the risks identified. In this process there can also be a push to finalize the internal audit “plan” so that budgets, schedules, and staffing can be arranged.

The emerging concept of “risk velocity” – measuring how fast a risk may affect an organization – brings recognition that the typical risk assessment process is neither dynamic and iterative nor responsive to change in real time. Change does not occur on an annual basis. The move to a continuous and dynamic audit plan is significant for most internal audit departments. Some departments are already moving on this path and have had to adjust from a static process focused on listening to management on a seasonal basis to monitoring business objectives and risks that are rapidly changing.

Tony Redlinger, internal audit director with IHS Markit, observes the difficulties of the timely capture of risks as “asking the pertinent questions often without the broader knowledge of what the business is getting into, where the technology often advances much faster than the controls.”

BEYOND THE TYPICAL INTERNAL AUDIT RISK ASSESSMENT

What approaches can internal audit functions take to ramp up the process and achieve more dynamic audit planning?

One technique is to increase the frequency of the process and design a rolling series of assessments and audit planning. If existing processes can be made more streamlined and efficient, the cycle can be shortened so that it occurs more frequently. Potentially, a concerted effort can result in an audit plan being updated every six months instead of annually. Since the risk identification process ideally is ongoing, management should be encouraged to implement a schedule to periodically review risks, while reserving the ability to accelerate reviews if a company objective changes, or risk factors increase.

For example, if management is considering an acquisition in a new jurisdiction, it could require the reevaluation of risk factors to determine how the decision could impact operations. Such processes can be formally linked into internal audit planning. Of course, existing sources of risk information should be identified and integrated into internal audit planning.
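The rolling cadence described above – a routine review interval that tightens when an objective changes or risk factors spike – can be sketched in a few lines. This is an illustrative scheduler only; the six- and three-month intervals are assumptions, not recommendations from the article:

```python
from datetime import date, timedelta

def next_review(last_review: date, base_months: int = 6,
                accelerated: bool = False) -> date:
    """Schedule the next risk review; halve the interval when risk
    factors increase or a company objective changes."""
    months = base_months // 2 if accelerated else base_months
    # Approximate a month as 30 days for scheduling purposes.
    return last_review + timedelta(days=30 * months)

routine = next_review(date(2024, 1, 15))
urgent = next_review(date(2024, 1, 15), accelerated=True)
print(routine, urgent)
```

The mechanics are trivial; the hard part, as the article notes, is wiring the acceleration trigger to real signals such as a pending acquisition or a shift in company objectives.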

Other assessment processes – including Enterprise Risk Management activities, department self-assessments, and other function-specific reviews in high-impact areas depending on industry (e.g., environmental hazards, cybersecurity threats) – should connect and feed into internal audit processes.

Internal Audit 1

TECHNOLOGY TOOLS AND REALISM ABOUT SURVEYS

In the typical risk assessment, preparatory materials are provided and participants are asked a series of questions during sessions with audit staff. This process is expected to produce information to guide the allocation of resources and activities within internal audit so as to optimize the match between the company’s greatest risks and the corresponding mitigation efforts. The availability of sophisticated technology tools such as online surveys can seem to make it cheap and easy to gather voluminous data from a larger population, and to conduct statistical analysis of that data.

Dr. Hernan Murdock, vice president of the audit division at MISTI, finds surveys and questionnaires to be a technique to collect information. “[Questionnaires] promote risk and control awareness, while encouraging transparency and accountability,” he says.

Potentially, this means we can conduct a much larger assessment with the same resources. There is definitely a place for crowdsourcing risk as well as casting a wide net for particular fact patterns of concern, such as use of third-party sales intermediaries or collection of consumer personal data. Still, more data is not always better data. The essence of a good risk assessment is not popular opinion, mechanically sliced and diced; it is informed opinion and expert judgment applied to the facts. Be careful about gathering far more data than can be followed up on or analyzed meaningfully, as this can create human-judgment bottlenecks in the process.

Ordinarily, risk assessments gather information from senior executives and managers, as well as a sample of senior operational personnel in the business units. To the extent that “risk owners” are not in these groups, they are usually sought out, and sometimes manager-level input is also requested.

Front-line workers should be considered as well. It’s usually those who are in the details on a daily basis that have the best perspectives on risks and low-hanging fruit when it comes to increasing operational efficiency.

THE RISK OF THE INTERNAL AUDIT RISK ASSESSMENT

Here we are not talking about the risk assessment that drives the audit plan. Rather, this is the risk that the internal audit function itself will not achieve its objectives as a result of the risk assessment. Should you perform this type of quality engagement as well? See IIA’s Standards for the Professional Practice of Internal Auditing 2120—Risk Management: “The internal audit activity must evaluate the effectiveness and contribute to the improvement of risk management processes.”

The internal audit function in this regard should consider risks such as:

  • The potential that the audit risk assessment is inaccurate or incomplete leading to an ineffective audit plan
  • Audit staffing that is insufficient in terms of quality and capacity to deliver useful results on every engagement
  • Changes in business and risk not promptly identified so that the audit plan can be updated
  • Audit communications failing to provide information organizational stakeholders need, when they need it
  • Governance roles not able to understand audit results and their implications for management of the organization

Internal Audit 2

Beyond Quality: The Four-Part Approach for Audit Efficiency and Effectiveness

STEP 1: PLAN FOR ORGANIZATIONAL GROWTH

While the concept of quality is uniform for internal auditors of different varieties and capacities, effectiveness and efficiency can vary from organization to organization. Accordingly, clear definitions for these terms—the expectations for your team—must be established and adopted to plan for growth.

Use these questions as guidance when defining exactly what effectiveness and efficiency mean for you and your team:

  • Are we equipped with the up-to-date tools needed to conduct the best work possible?
  • Do we have the right resources and skill sets required to deliver our audit plan?
  • Are we contributing to organizational improvement? If so, can others see this?
  • Have we obtained any validation of our team’s quality, such as notification from managers or executives?
  • Is feedback effectively distributed to team members, so they know what areas to improve?
  • What quantifiable metrics can we associate with these definitions?

While you and your team’s definitions of effectiveness and efficiency are crucial, it is also important to gain the approval of key stakeholders involved in internal audit.

A major reason that process improvement initiatives fail, according to one Harvard Business Review article, is that the people whose work will be directly impacted are often left out of the process.

Accordingly, feedback from stakeholders at the helm of the financial success of your company should also be incorporated. Here are a few stakeholders who should weigh in on your definitions of effectiveness and efficiency:

  1. Internal stakeholders: Board of directors, audit committee, executives, senior management and department leads
  2. External stakeholders: Regulators, standard-setters, vendors, customers and external audit teams

STEP 2: DO THE WORK NEEDED TO SET EXPECTATIONS

The second step of this process continues to articulate the definitions of effectiveness and efficiency, and sets expectations for your team.

By this stage, you should have an internal definition of effectiveness and efficiency, and you have tempered that definition in the context of what key internal and external stakeholders need. To better set your organization up for success, make these definitions more actionable and specific through the assignment of qualitative and quantitative metrics.

As described in a Forbes article, Forrester reports 74 percent of firms say they want to be “data-driven,” but only 29 percent are actually successful at connecting analytics to action. Actionable insights appear to be the missing link for companies that want
to drive business outcomes from their data.

To collect qualitative and quantitative metrics, try the following tactics:

  • Look back at past performance data to determine quantitative metrics:
    • How many audits were scheduled?
    • How many were completed?
    • How was staff utilized?
    • What were the budgeted hours as compared to the actual hours?
  • Go on a listening tour of departments impacted by your work to determine qualitative metrics:
    • What do clients think of your team’s performance?
    • What do other internal stakeholders think of your team’s performance?
    • Do they consider you and your team to be leaders in your role, or order-takers?
    • Would they want to engage in future projects with your team?

With these actionable definitions in hand, the expectations for your team should be crystal clear. It is ultimately up to chief audit executives to hold their teams accountable for efficient and effective—along with quality—work.

STEP 3: CHECK PROGRESS AGAINST SET EXPECTATIONS

To check the quality, effectiveness, and efficiency of your team’s work, internal audit leaders should look at individual performance on an ongoing basis—not just an annual one. After all, it is easier and less problematic for leaders to reevaluate individual performance in small increments before it becomes a major issue.

In organizations of all sizes, a traditional once-per-year approach to employee reviews is fading away in favor of more ongoing ones. As a Washington Post article describes, today’s employees have come to expect instant feedback in many other areas of their lives, and performance reviews should be the same. Besides, the article states, one report found that two-thirds of employees who receive the highest scores in a typical performance management system are not actually the organization’s highest performers.

Chief audit executives should encourage the completion of self-appraisals. A Harvard Business Review article explains that an effective self-appraisal should focus on what you have accomplished and talk about weaknesses carefully, using language with an emphasis on growth and improvement, rather than admonishment. Highlight blind spots that your team might not be aware exist.

In short, employees want more frequent and iterative assessments of their work, and internal audit leaders need to step up to deliver this and ensure quality, effectiveness, and efficiency at all stages.

STEP 4: ACT UPON WHAT YOU HAVE LEARNED

By this step, internal audit leaders have an array of tools at their disposal, including:

  • Actionable definitions of effectiveness and efficiency for their teams
  • Qualitative and quantitative metrics to bolster these definitions
  • Information gathered from self- and manager-guided evaluations
  • An understanding of how team members have performed along these guidelines

With this information in hand, many opportunities for growth are apparent—simply compare where you want your team members to be against where they are right now. By
implementing these fact-based changes into your internal audit processes, leaders set the stage for cyclical organizational and personal improvement.

According to a survey, this type of continuous improvement yields a positive ROI for organizations, helping increase revenue, along with saving time and money—an average annual impact of $6,000. Additionally, these improvements are designed to compound with each cycle.

Just as the approach to monitoring and improving audit quality is ongoing and cyclical—there are always improvements yet to be made—this approach to improving effectiveness and efficiency is fluid as well.

By weaving this four-part process into the fabric of your internal audit methodology, leaders can improve effectiveness and efficiency in their organizations.

 

Click here to access Workiva’s and MISTI’s White Paper

Mastering Risk with “Data-Driven GRC”

Overview

The world is changing. The emerging risk landscape in almost every industry vertical has changed. Effective methodologies for managing risk have changed (whatever your perspective:

  • internal audit,
  • external audit/consulting,
  • compliance,
  • enterprise risk management,

or otherwise). Finally, technology itself has changed, and technology consumers expect to realize more value, from technology that is more approachable, at lower cost.

How are these factors driving change in organizations?

Emerging Risk Landscapes

Risk has the attention of top executives. Risk shifts quickly in an economy where “speed of change” is the true currency of business, and it emerges in entirely new forms in a world where globalization and automation are forcing shifts in the core values and initiatives of global enterprises.

Evolving Governance, Risk, and Compliance Methodologies

Across risk- and control-oriented functions – spanning audit, fraud, compliance, quality management, enterprise risk management, financial control, and many more – global organizations are acknowledging a need to provide more risk coverage at lower cost (measured in both time and currency), which is driving re-inventions of methodology and automation.

Empowerment Through Technology

Gartner, the leading analyst firm in the enterprise IT space, is very clear that the convergence of four forces—Cloud, Mobile, Data, and Social—is driving the empowerment of individuals as they interact with each other and their information through well-designed technology.

In most organizations, there is no coordinated effort to leverage organizational changes emerging from these three factors in order to develop an integrated approach to mastering risk management. The emerging opportunity is to leverage the change that is occurring to develop new programs – not just for technology, of course, but also for the critical people, methodology, and process issues. The goal is to provide senior management with a comprehensive and dynamic view of the effectiveness of how an organization is managing risk and embracing change, set in the context of overall strategic and operational objectives.

Where are organizations heading?

“Data-Driven GRC” represents a consolidation of methodologies, both functional and technological, that dramatically enhances the opportunity to address emerging risk landscapes and, in turn, to maximize the reliability of organizational performance.

This paper examines the key opportunities to leverage change—both from a risk and an organizational performance management perspective—to build integrated, data-driven GRC processes that optimize the value of audit and risk management activities, as well as the investments in supporting tools and techniques.

Functional Stakeholders of GRC Processes and Technology

The Institute of Internal Auditors’ (IIA) “Three Lines of Defense in Effective Risk Management and Control” model specifically addresses the “who and what” of risk management and control. It distinguishes and describes three role- and responsibility-driven functions:

  • Those that own and manage risks (management – the “first line”)
  • Those that oversee risks (risk, compliance, financial controls, IT – the “second line”)
  • Those that provide independent assurance over risks (internal audit – the “third line”)

The overarching context of these three lines acknowledges the broader role of organizational governance and governing bodies.

[Figure: the IIA’s Three Lines of Defense model]

Technology Solutions

Data-Driven GRC is not achievable without a technology platform that supports the steps illustrated above, and integrates directly with the organization’s broader technology environment to acquire the data needed to objectively assess and drive GRC activities.

From a technology perspective, there are four main components required to enable the major steps in the Data-Driven GRC methodology:

1. Integrated Risk Assessment

Integrated risk assessment technology maintains the inventory of strategic risks and the assessment of how well they are managed. As the interface of the organization’s most senior professionals into GRC processes, it must be a tool relevant to and usable by executive management. This technology sets the priorities for risk mitigation efforts, thereby driving the development of project plans crafted by each of the functions in the different lines of defense.

2. Project & Controls Management

A project and controls management system (often referred to more narrowly as audit management systems or eGRC systems) enables the establishment of project plans in each risk and control function that map against the risk mitigation efforts identified as required. Projects can then be broken down into actionable sets of tactical level risks, controls that mitigate those risks, and tests that assess those controls.

This becomes the backbone of the organization’s internal control environment and related documentation and evaluation, all setting context for what data is actually required to be tested or monitored in order to meet the organization’s strategic objectives.
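The project → risks → controls → tests hierarchy described above can be sketched as a simple data model. This is an illustrative sketch only; the class and field names are assumptions, not taken from any particular eGRC product:

```python
from dataclasses import dataclass, field
from typing import Optional

@dataclass
class ControlTest:
    name: str
    passed: Optional[bool] = None   # None means the test has not yet been run

@dataclass
class Control:
    name: str
    tests: list = field(default_factory=list)

@dataclass
class TacticalRisk:
    name: str
    controls: list = field(default_factory=list)

@dataclass
class Project:
    """A risk-mitigation project broken down into tactical risks,
    the controls that mitigate them, and the tests of those controls."""
    name: str
    risks: list = field(default_factory=list)

    def untested_controls(self):
        """Names of controls with at least one test not yet executed."""
        return [c.name
                for r in self.risks for c in r.controls
                if any(t.passed is None for t in c.tests)]

p = Project("AP review", risks=[
    TacticalRisk("duplicate payments", controls=[
        Control("3-way match", tests=[ControlTest("sample 25 invoices")]),
    ]),
])
```

A structure like this is what lets the platform answer, at any moment, which parts of the control environment still lack evidence.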

3. Risk & Control Analytics

If you think of Integrated Risk Assessment as the brain of the Data-Driven GRC program and the Project & Controls Management component as the backbone, then Risk & Control Analytics are the heart and lungs.

An analytic toolset is critical to reaching out into the organizational environment and acquiring all of the inputs (data) that are required to be aggregated, filtered, and processed in order to route back to the brain for objective decision making. It is important that this toolset be specifically geared toward risk and control analytics so that the filtering and processing functionality is optimized for identifying anomalies representing individual occurrences of risk, while being able to cope with huge populations of data and illustrate trends over time.
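The "trends over time" requirement can be illustrated with a minimal sketch: bucketing rule exceptions by month so a huge population of individual anomalies becomes a readable trend. Field names here are assumptions for illustration:

```python
from collections import Counter

def exception_trend(transactions, is_exception):
    """Count rule exceptions per month (YYYY-MM bucket) so that a
    large population of anomalies can be read as a trend over time.
    Accepts any iterable, so huge datasets can be streamed through."""
    trend = Counter()
    for txn in transactions:
        if is_exception(txn):
            trend[txn["date"][:7]] += 1
    return dict(sorted(trend.items()))

txns = [
    {"date": "2024-01-05", "amount": -10},
    {"date": "2024-01-20", "amount": 50},
    {"date": "2024-02-02", "amount": -3},
]
trend = exception_trend(txns, lambda t: t["amount"] < 0)
```

The same pattern scales from a toy list to a database cursor, which is the point: the filtering logic stays constant while the population grows.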

4. Knowledge Content

Supporting all of the technology components, knowledge content comes in many forms and provides the specialized knowledge of risks, controls, tests, and data required to perform and automate the methodology across a wide range of organizational risk areas.

Knowledge content should be acquired in support of individual risk and control objectives and may include items such as:

  • Risk and control templates for addressing specific business processes, problems, or high-level risk areas
  • Integrated compliance frameworks that balance multiple compliance requirements into a single set of implemented and tested controls
  • Data extractors that access specific key corporate systems and extract the data sets required for evaluation (e.g., an SAP-supported organization may need an extractor that pulls a complete set of fixed asset data from its specific version of SAP, which may be used to run all required tests of controls related to fixed assets)
  • Data analysis rule sets (or analytic scripts) that take a specific data set and evaluate what transactions in the data set violate the rules, indicating control failures occurred
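A data-analysis rule set of the kind described in the last bullet can be sketched as a table of named predicates applied to a data set. The rule names and transaction fields below are illustrative, not drawn from any vendor's content library:

```python
# Each rule maps a name to a predicate; a transaction matching any
# predicate indicates a potential control failure.
RULES = {
    "negative amount": lambda t: t["amount"] < 0,
    "missing approver": lambda t: not t.get("approver"),
    "round-sum payment": lambda t: t["amount"] >= 1000 and t["amount"] % 1000 == 0,
}

def evaluate(dataset):
    """Return {rule_name: [violating transactions]} for a data set."""
    return {name: [t for t in dataset if pred(t)]
            for name, pred in RULES.items()}

data = [
    {"amount": -5, "approver": "a.lee"},
    {"amount": 2000, "approver": ""},
]
result = evaluate(data)
```

Keeping the rules declarative is what makes them shippable as content: the same `evaluate` engine runs whichever rule set is loaded for a given risk area.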

Mapped against the fully integrated Data-Driven GRC methodology, the key technology pieces that make up an integrated risk and control platform look as follows:

[Figure: Data-Driven GRC technology mapping]

When evaluating technology platforms, it is imperative that each piece of this puzzle directly integrates with the others; otherwise, manual aggregation of results will be required, which is not only laborious but also inconsistent, disorganized, and (by definition) a violation of the Data-Driven GRC methodology.


 

Click here to access ACL’s study

Integrating Finance, Risk and Regulatory Reporting (FRR) through Comprehensive Data Management

Data travels faster than ever, anywhere and all the time. Yet as fast as it moves, it has barely been able to keep up with the expanding agendas of financial supervisors. You might not know it to look at them, but the authorities in Basel, Washington, London, Singapore and other financial and political centers are pretty swift themselves when it comes to devising new requirements for compiling and reporting data. They seem to want nothing less than a renaissance in the way institutions organize and manage their finance, risk and regulatory reporting activities.

The institutions themselves might want the same thing. Some of the business strategies and tactics that made good money for banks before the global financial crisis have become unsustainable and cut into their profitability. More stringent regulatory frameworks imposed since the crisis require the implementation of complex, data-intensive stress testing procedures and forecasting models that call for unceasing monitoring and updating. The days of static reports capturing a moment in a firm’s life are gone. One of the most challenging data management burdens is rooted in duplication. The evolution of regulations has left banks with various bespoke databases across five core functions:

  • credit,
  • treasury,
  • profitability analytics,
  • financial reporting
  • and regulatory reporting,

with the same data inevitably appearing and processed in multiple places. This hodgepodge of bespoke marts simultaneously leads to both the duplication of data and processes, and the risk of inconsistencies – which tend to rear their head at inopportune moments (i.e. when consistent data needs to be presented to regulators). For example,

  • credit extracts core loan, customer and credit data;
  • treasury pulls core cash flow data from all instruments;
  • profitability departments pull the same instrument data as credit and treasury and add ledger information for allocations;
  • financial reporting pulls ledgers and some subledgers for reporting;
  • and regulatory reporting pulls the same data yet again to submit reports to regulators per prescribed templates.

The ever-growing list of considerations has compelled firms to revise, continually and on the fly, not just how they manage their data but how they manage their people and basic organizational structures. An effort to integrate activities and foster transparency – in particular through greater cooperation between risk and finance – has emerged across financial services. This often has been in response to demands from regulators, but some of the more enlightened leaders in the industry see it as the most sensible way to comply with supervisory mandates and respond to commercial exigencies as well. Their ability to do that has been constrained by the variety, frequency and sheer quantity of information sought by regulators, boards and senior executives. But that is beginning to change as a result of new technological capabilities and, at least as important, new management strategies.

This is where the convergence of Finance, Risk and Regulatory Reporting (FRR) comes in. The idea behind the FRR theme is that sound regulatory compliance and sound business analytics are manifestations of the same set of processes. Satisfying the demands of supervisory authorities and maximizing profitability and competitiveness in the marketplace involve similar types of analysis, modeling and forecasting. Each is best achieved, therefore, through a comprehensive, collaborative organizational structure that places the key functions of finance, risk and regulatory reporting at its heart.

The glue that binds this entity together and enables it to function as efficiently and cost effectively as possible – financially and in the demands placed on staff – is a similarly comprehensive and unified approach to FRR data management. The right architecture will permit data to be drawn upon from all relevant sources across an organization, including disparate legacy hardware and software accumulated over the years in silos erected for different activities and geographies. Such an approach will reconcile and integrate this data and present it in a common, consistent, transparent fashion, permitting it to be deployed in the most efficient way within each department and for every analytical and reporting need, internal and external.

The immense demands for data, and for a solution to manage it effectively, have served as a catalyst for a revolutionary development in data management: Regulatory Technology, or RegTech. The definition is somewhat flexible and tends to vary with the motivations of whoever is doing the defining, but RegTech basically is the application of cutting-edge hardware, software, design techniques and services to the idiosyncratic challenges related to financial reporting and compliance. The myriad advances that fall under the RegTech rubric, such as centralized FRR or RegTech data management and analysis, data mapping and data visualization, are helping financial institutions to get out in front of the stringent reporting requirements at last and advance their efforts to integrate finance, risk and regulatory reporting duties more fully, easily and creatively.

A note of caution though: While new technologies and new thinking about how to employ them will present opportunities to eliminate weaknesses that are likely to have crept into the current architecture, ferreting out those shortcomings may be tricky because some of them will be so ingrained and pervasive as to be barely recognizable. But it will have to be done to make the most of the systems intended to improve or replace existing ones.

Just what a solution should encompass to enable a firm to meet its data management objectives depends on:

  • the specifics of its business, including its size and product lines,
  • the jurisdictions in which it operates,
  • its IT budget
  • and the tech it has in place already.

But it should accomplish three main goals:

  1. Improving data lineage by establishing a trail for each piece of information at any stage of processing
  2. Providing a user-friendly view of the different processing steps to foster transparency
  3. Working together seamlessly with legacy systems so that implementation takes less time and money and imposes less of a burden on employees.
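The first goal, data lineage, can be sketched as a trail that every processing step appends to. This is a minimal illustration, not a production lineage system; the step names and fields are assumptions:

```python
def step(name, fn, item):
    """Apply one processing step to an item and record the step in
    its lineage trail, so any reported figure can be traced back
    through every transformation that produced it."""
    return {"value": fn(item["value"]),
            "lineage": item["lineage"] + [name]}

# A figure flows from source system through FX conversion to a report:
raw = {"value": 100.0, "lineage": ["source:loan_system"]}
converted = step("fx:EUR->USD@1.10", lambda v: v * 1.10, raw)
reported = step("round:2dp", lambda v: round(v, 2), converted)
```

When a regulator questions the reported number, the trail names each stage that touched it, in order.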

The two great trends in financial supervision – the rapid rise in data management and reporting requirements, and the demands for greater organizational integration – can be attributed to a single culprit: the lingering silo structure. Fragmentation continues to be supported by such factors as a failure to integrate the systems of component businesses after a merger and the tendency of some firms to find it more sensible, even if it may be more costly and less efficient in the long run, to install new hardware and software whenever a new set of rules comes along. That makes regulators – the people pressing institutions to break down silos in the first place – inadvertently responsible for erecting new barriers.

This bunker mentality – an entrenched system of entrenchment – made it impossible to recognize the massive buildup of credit difficulties that resulted in the global crisis. It took a series of interrelated events to spark the wave of losses and insolvencies that all but brought down the financial system. Each of them might have appeared benign or perhaps ominous but containable when taken individually, and so the occupants of each silo, who could only see a limited number of the warning signs, were oblivious to the extent of the danger. More than a decade has passed since the crisis began, and many new supervisory regimens have been introduced in its aftermath. Yet bankers, regulators and lawmakers still feel the need, with justification, to press institutions to implement greater organizational integration to try to forestall the next meltdown. That shows how deeply embedded the silo system is in the industry.

The data requirements for the approach that, knock on wood, will limit the damage from the next crisis – determining what will happen, rather than identifying and explaining what has already happened – are enormous. The same goes for running an institution in a more integrated way. It’s not just more data that’s needed, but more kinds of data and more reliable data. A holistic, coordinated organizational structure, moreover, demands that data be analyzed at a higher level to reconcile the massive quantities and types of information produced within each department. And institutions must do more than compile and sort through all that data. They have to report it to authorities – often quarterly or monthly, sometimes daily and always when something is flagged that could become a problem. Indeed, some data needs to be reported in real time. That is a nearly impossible task for a firm still dominated by silos and highlights the need for genuinely new design and implementation methods that facilitate the seamless integration of finance, risk and regulatory reporting functions. Among the more data-intensive regulatory frameworks introduced or enhanced in recent years are:

  • IFRS 9 Financial Instruments and Current Expected Credit Loss. The respective protocols of the International Accounting Standards Board and Financial Accounting Standards Board may provide the best examples of the forward-thinking approach – and rigorous reporting, data management and compliance procedures – being demanded. The standards call for firms to forecast credit impairments to assets on their books in near real time. The incurred-loss model being replaced merely had banks present bad news after the fact. The number of variables required to make useful forecasts, plus the need for perpetually running estimates that hardly allow a chance to take a breath, make the standards some of the most data-heavy exercises of all.
  • Stress tests here, there and everywhere. Whether for the Federal Reserve’s Comprehensive Capital Analysis and Review (CCAR) for banks operating in the United States, the Firm Data Submission Framework (FDSF) in Britain or the Asset Quality Reviews conducted by the European Banking Authority (EBA) for institutions in the euro zone, stress testing has become more frequent and more free-form, too, with firms encouraged to create stress scenarios they believe fit their risk profiles and the characteristics of their markets. Indeed, the EBA is implementing a policy calling on banks to conduct stress tests as an ongoing risk management procedure, not merely an assessment of conditions at certain discrete moments.
  • Dodd-Frank Wall Street Reform and Consumer Protection Act. The American law expands stress testing to smaller institutions that escape the CCAR. The act also features extensive compliance and reporting procedures for swaps and other over-the-counter derivative contracts.
  • European Market Infrastructure Regulation. Although less broad in scope than Dodd-Frank, EMIR has similar reporting requirements for European institutions regarding OTC derivatives.
  • AnaCredit, Becris and FR Y-14. The European Central Bank project, known formally as the Analytical Credit Dataset, and its Federal Reserve equivalent for American banks, respectively, introduce a step change in the amount and granularity of data that needs to be reported. Information on loans and counterparties must be reported contract by contract under AnaCredit, for example. Adding to the complication and the data demands, the European framework permits national variations, including some with particularly rigorous requirements, such as the Belgian Extended Credit Risk Information System (Becris).
  • MAS 610. The core set of returns that banks file to the Monetary Authority of Singapore are being revised to require information at a far more granular level beginning next year. The number of data elements that firms have to report will rise from about 4,000 to about 300,000.
  • Economic and Financial Statistics Review (EFS). The Australian Prudential Authority’s EFS Review constitutes a wide-ranging update to the regulator’s statistical data collection demands. The sweeping changes include requests for more granular data and new forms in what would be a three-phase implementation spanning two years, requiring parallel and trial periods running through 2019 and beyond.
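The IFRS 9/CECL item above shows why these frameworks are so data-intensive: even the textbook expected-credit-loss formula multiplies three modeled inputs per period. The sketch below is the generic formulation (PD × LGD × EAD summed over the horizon), not either standard-setter's exact text; discounting and EAD run-off are omitted for brevity:

```python
def expected_credit_loss(annual_pds, lgd, ead, stage):
    """Textbook ECL: sum of PD x LGD x EAD over the relevant horizon.
    Stage 1 assets take a 12-month probability of default (PD); stage
    2 and 3 assets take lifetime PDs. lgd is loss given default (a
    fraction); ead is exposure at default (a currency amount)."""
    horizon = annual_pds[:1] if stage == 1 else annual_pds
    return sum(pd * lgd * ead for pd in horizon)

# A loan with lifetime PDs of 2%, 3% and 1%, 50% LGD, 1,000 exposure:
stage1_ecl = expected_credit_loss([0.02, 0.03, 0.01], 0.5, 1000.0, stage=1)
lifetime_ecl = expected_credit_loss([0.02, 0.03, 0.01], 0.5, 1000.0, stage=2)
```

Every input here (the PD term structure above all) must itself be modeled and perpetually refreshed, which is the source of the data burden the bullet describes.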

All of those authorities, all over the world, requiring that much more information present a daunting challenge, but they aren’t the only ones demanding that finance, risk and regulatory reporting staffs raise their games. Boards, senior executives and the real bosses – shareholders – have more stringent requirements of their own for profitability, capital efficiency, safety and competitiveness. Firms need to develop more effective data management and analysis in this cause, too.

The critical role of data management was emphasized and codified in Document 239 of the Basel Committee on Banking Supervision (BCBS), “Principles for Effective Risk Data Aggregation and Risk Reporting.” PERDARR, as it has come to be called in the industry, assigns data management a central position in the global supervisory architecture, and the influence of the 2013 paper can be seen in mandates far and wide. BCBS 239 explicitly linked a bank’s ability to gauge and manage risk with its ability to function as an integrated, cooperative unit rather than a collection of semiautonomous fiefdoms. The process of managing and reporting data, the document makes clear, enforces the link and binds holistic risk assessment to holistic operating practices. The Basel committee’s chief aim was to make sure that institutions got the big picture of their risk profile so as to reveal unhealthy concentrations of exposure that might be obscured by focusing on risk segment by segment. Just in case that idea might escape some executive’s notice, the document mentions the word “aggregate,” in one form or another, 86 times in the 89 ideas, observations, rules and principles it sets forth.

The importance of aggregating risks, and having data management and reporting capabilities that allow firms to do it, is spelled out in the first of these: ‘One of the most significant lessons learned from the global financial crisis that began in 2007 was that banks’ information technology (IT) and data architectures were inadequate to support the broad management of financial risks. Many banks lacked the ability to aggregate risk exposures and identify concentrations quickly and accurately at the bank group level, across business lines and between legal entities. Some banks were unable to manage their risks properly because of weak risk data aggregation capabilities and risk reporting practices. This had severe consequences to the banks themselves and to the stability of the financial system as a whole.’

If risk data management was an idea whose time had come when BCBS 239 was published five years ago, then RegTech should have been the means to implement the idea. RegTech was being touted even then, or soon after, as a set of solutions that would allow banks to increase the quantity and quality of the data they generate, in part because RegTech itself was quantitatively and qualitatively ahead of the hardware and software with which the industry had been making do. There was just one ironic problem: Many of the RegTech solutions on the market at the time were highly specialized and localized products and services from small providers. That encouraged financial institutions to approach data management deficiencies gap by gap, project by project, perpetuating the compartmentalized, siloed thinking that was the scourge of regulators and banks alike after the global crisis. The one-problem-at-a-time approach also displayed to full effect another deficiency of silos: a tendency for work to be duplicated, with several departments each producing the same information, often in different ways and with different results. That is expensive and time consuming, of course, and the inconsistencies that are likely to crop up make the data untrustworthy for regulators and for executives within the firm who are counting on it.

Probably the most critical feature of a well thought-out solution is a dedicated, focused and central FRR data warehouse that can chisel away at the barriers between functions, even at institutions that have been slow to abandon a siloed organizational structure reinforced with legacy systems.

[Figure: FRR data management architecture]

Where:

  • E: Extract
  • L: Load
  • T: Transform structures
  • C: Calculations
  • A: Aggregation
  • P: Presentation
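The six-stage flow in the legend above can be sketched end to end as a chain of stage functions. The stage bodies are stand-ins for real warehouse logic, and the field names (`exposure`, `weight`, `ccy`) are assumptions for illustration:

```python
def extract(sources):    # E: pull rows from every source system
    return [row for src in sources for row in src]

def load(rows):          # L: land the raw rows in the warehouse
    return list(rows)

def transform(rows):     # T: normalize structures (default the currency)
    return [{**r, "ccy": r.get("ccy", "USD")} for r in rows]

def calculate(rows):     # C: derive figures, e.g. risk-weighted exposure
    return [{**r, "rwa": r["exposure"] * r["weight"]} for r in rows]

def aggregate(rows):     # A: roll derived figures up by currency
    totals = {}
    for r in rows:
        totals[r["ccy"]] = totals.get(r["ccy"], 0.0) + r["rwa"]
    return totals

def present(totals):     # P: format for the report consumer
    return {k: round(v, 2) for k, v in totals.items()}

def eltcap(sources):
    """Run the full E-L-T-C-A-P pipeline over a set of source feeds."""
    return present(aggregate(calculate(transform(load(extract(sources))))))

report = eltcap([
    [{"exposure": 100.0, "weight": 0.5}],
    [{"exposure": 200.0, "weight": 1.0, "ccy": "EUR"}],
])
```

The point of the architecture is that each stage has one home: every downstream report consumes the same transformed, calculated, aggregated data rather than recomputing it in a departmental mart.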

 

Click here to access Wolters Kluwer’s White Paper

 

 

Front Office Risk Management Technology

A complex tangle of embedded components

Over the past three decades, Front Office Risk Management (FORM) has developed in a piecemeal way. As a result of historical business drivers and the varying needs of teams focused on different products within banks, FORM systems were created for individual business silos, products and trading desks. Typically, different risk components and systems were entwined and embedded within trading systems and transaction processing platforms, and ran on different analytics, trade capture and data management technology. As a result, many banks now have multiple, varied and overlapping FORM systems.

Increasingly, however, FORM systems are emerging as a fully fledged risk solution category, rather than remaining as embedded components inside trading systems or transactional platforms (although those components still exist). For many institutions, FORM, along with the front-office operating environment, has fundamentally changed following the global financial crisis of 2008. Banks are now dealing with a wider environment of systemically reduced profitability in which cluttered and inefficient operating models are no longer sustainable, and there are strong cost pressures for them to simplify their houses.

Equally, a more stringent and prescriptive regulatory environment is having significant direct and indirect impacts on front-office risk technology. Because of regulators’ intense scrutiny of banks’ capital management, the front office is continuously and far more acutely aware of its capital usage (and cost), and this is having a fundamental impact on the way the systems it uses are evolving. The imperative for risk-adjusted pricing means that traditional trading systems are struggling to cope with the growing importance of and demand for Valuation Adjustment (xVA) systems at scale. Meanwhile, regulations such as the Fundamental Review of the Trading Book (FRTB) will have profound implications for front-office risk systems.

As a result of these direct and indirect regulatory pressures, several factors are changing the front-office risk technology landscape:

  • The scale and complexity involved in data management.
  • Requirements for more computational power.
  • The imperative for integration and consistency with middle-office risk systems.

Evolving to survive

As banks recognize the need for change, FORM is slowly but steadily evolving. Banks can no longer put off upgrades to systems that were built for a different era, and consensus around the need for a flexible, cross-asset, externalized front-office risk system has emerged.

Over the past few years, most Tier 1 and Tier 2 banks have started working toward the difficult goal of

  • standardizing,
  • consolidating
  • and externalizing

their risk systems, extracting them from trading and transaction processing platforms (if that’s where they existed). These efforts are complicated by the nature of FORM – specifically that it cuts across several functional areas.

Vendors, meanwhile, are struggling to meet the often contradictory nature of front-office demands (such as the need for flexibility vs. scalability). As the front-office risk landscape shifts under the weight of all these demand-side changes, many leading vendors have been slow to adapt to the significant competitive challenges. Not only are they dealing with competition from new market entrants with different business models, in many instances they are also playing catch-up with more innovative Tier 1 banks. What’s more, the willingness to experiment and innovate with front-office risk systems is now filtering down to Tier 2s and smaller institutions across the board. Chartis is seeing an increase in ‘build and buy’ hybrid solutions that leverage open-source and open HPC infrastructure.

The rapid development of new technologies is radically altering the dynamics of the market, driven by several developments:

  • A wave of new, more focused tools.
  • Platforms that leverage popular computational paradigms.
  • Software as a Service (SaaS) risk systems.

More often than not, incumbent vendors are failing to harness the opportunities that these technologies and new open-source languages bring, increasing the risk that they could become irrelevant within the FORM sector. Chartis contends that, as the market develops, the future landscape will be dominated by a combination of agile new entrants and existing players that can successfully transform their current offerings. Many different vendor strategies are in evidence, but the evolution required for vendors to survive and flourish has only just begun.

With that in mind, we have outlined several recommendations for vendors seeking to stay relevant in the new front-office risk environment:

  • Above all, focus on an open, flexible environment.
  • Create consistent risk data and risk factor frameworks.
  • Develop highly standardized interfaces.
  • Develop matrices and arrays as ‘first-class constructs’.
  • Embrace open-source languages and ecosystems.
  • Consider options such as partnerships and acquisitions to acquire the requisite new skills and technology capabilities in a relatively short period of time.


Click here to access Chartis’ Vendor Spotlight Report