News

EIOPA: Digital Transformation Strategy – Promoting sound progress for the benefit of the European Union economy, its citizens and businesses

EIOPA’S DIGITAL TRANSFORMATION STRATEGIC PRIORITIES AND OBJECTIVES

EIOPA’s supervisory and regulatory activities are always underpinned by two overarching objectives:
promoting consumer protection and financial stability. The digital transformation strategy aims to identify areas where, in light of these overarching objectives, EIOPA can best commit its resources to addressing the challenges posed by digitalisation, while at the same time seeking to identify and remove undue barriers that limit its benefits.

This strategy sits alongside EIOPA’s other forward-looking prioritisation tools –

  • the Union-wide strategic supervisory priorities,
  • the Strategy on Cyber Underwriting,
  • the SupTech Strategy

– but its focus is less on the specific actions needed in different areas, and more on how EIOPA will support NCAs and the pensions and insurance sectors in facing digital transformations following a

  • technologically-neutral,
  • future-proof,
  • ethical
  • and secure approach

to financial innovation and digitalisation.

Five key long-term priorities have been identified, which will guide EIOPA’s contributions on
digitalisation topics:

  1. Leveraging the development of a sound European data ecosystem
  2. Preparing for the increased use of Artificial Intelligence while focusing on financial inclusion
  3. Ensuring a forward-looking approach to financial stability and resilience
  4. Realising the benefits of the European single market
  5. Enhancing the supervisory capabilities of EIOPA and NCAs.

These five long-term priorities are described in the following sections. Each relates to areas where
work is already underway or planned, whether at national or European level, by EIOPA or other
European bodies.

The aim is to focus on priority areas where EIOPA can add value so as to enhance synergies and
improve overall convergence and efficiency in our response as a supervisory community to the
digital transformation.

LEVERAGING THE DEVELOPMENT OF A SOUND EUROPEAN DATA ECOSYSTEM
ACCOMPANYING THE DEVELOPMENT OF AN OPEN FINANCE AND OPEN INSURANCE FRAMEWORK
Trends in the market show that the exchange of both personal and non-personal data through Application Programming Interfaces (APIs) is a key driver of transformation and integration in the financial sector. By enabling several stakeholders to “plug in” to an API and access timely, standardised data, insurance undertakings, in collaboration with other service providers, can promptly and adequately assess the needs of consumers and develop innovative and convenient products for them. Indeed, many types of use cases can be developed as a result of enhanced access to and sharing of data in insurance.

Examples of potential use cases include pension tracking systems (see further below), public and private comparison websites, or different forms of embedding insurance (including microinsurance) in the channels of other actors (retailers, airlines, car-sharing applications, etc.).

Another use case could allow consumers to conveniently access information about their insurance products from different providers in an integrated platform or application, and to identify any protection gaps (or overlaps) in coverage that they may have.
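As a purely illustrative sketch of this aggregation idea, the Python snippet below pulls a consumer’s policy summaries from two hypothetical provider APIs and groups them by line of business so that gaps and overlaps become visible. The endpoints, response shape and field names are all assumptions, not part of any real open insurance specification.

```python
# Illustrative only: the provider endpoints and the "standardised policy
# summary" response shape below are hypothetical assumptions.
import requests

PROVIDER_APIS = {
    "provider_a": "https://api.provider-a.example/policies",
    "provider_b": "https://api.provider-b.example/policies",
}

def fetch_policies(base_url: str, consumer_token: str) -> list[dict]:
    """Pull one consumer's policy summaries from a provider's API."""
    resp = requests.get(
        base_url,
        headers={"Authorization": f"Bearer {consumer_token}"},
        timeout=10,
    )
    resp.raise_for_status()
    return resp.json()["policies"]  # assumed response shape

def aggregate_coverage(token: str) -> dict[str, list[dict]]:
    """Group policies from all providers by line of business, so that
    gaps (no policy for a line) and overlaps (several) become visible."""
    by_line: dict[str, list[dict]] = {}
    for provider, url in PROVIDER_APIS.items():
        for policy in fetch_policies(url, token):
            line = policy.get("line_of_business", "unknown")
            by_line.setdefault(line, []).append({**policy, "provider": provider})
    return by_line
```

The same aggregation pattern, run over anonymised data, is what would allow a supervisory authority to consume such APIs for market monitoring.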

In addition to giving consumers access to a greater variety of products and services and enabling them to make more informed decisions, the seamless, real-time transfer of insurance-related data from one provider to another (data portability) could facilitate switching and enhance competition in the market.

Supervisory authorities could also connect to the relevant APIs to access anonymised market data, so as to develop more pre-emptive and evidence-based supervision and regulation.

However, it is also important to take into account relevant risks, such as those linked to data

  • quality,
  • breaches
  • and misuse.

ICT/cyber risks and financial inclusion risks are also relevant, as well as issues related to a level playing field and data reciprocity.

EIOPA considers that, if the risks are properly managed, several open insurance use cases can have significant benefits for consumers, for the sector and for its supervision. EIOPA will use the findings of its recent public consultation on this topic to collaborate with the European Commission on the development of the financial data space and/or open finance initiatives foreseen in the Commission’s Data Strategy and Digital Finance Strategy respectively, possibly focusing on specific use cases.

ADVISING ON THE DEVELOPMENT OF PENSIONS DATA TRACKING SYSTEMS IN THE EU
European public pension systems are facing the dual challenge of remaining financially sustainable in an ageing society and being able to provide Europeans with an adequate income in retirement. Hence, the relevance of supplementary occupational and personal pension systems is increasing. The latter are also experiencing a major trend, influenced by the low-interest-rate environment: a shift from Defined Benefit (DB) plans, which guarantee citizens a certain income after retirement, to Defined Contribution (DC) plans, where retirement income depends on how the accumulated contributions have been invested. As a consequence of these developments, more responsibility and financial risk are placed on individual citizens when planning their income after retirement.

In this context, Pension Tracking Systems (PTS) can provide the average citizen with simple and understandable aggregated information about his or her pension savings, typically conveniently accessible via digital channels. PTS are linked to the concept of Open Finance: different providers of statutory and private pensions share pension data in a standardised manner so that it can be aggregated, providing consumers with the relevant information to make informed decisions about their retirement planning.

EIOPA considers it increasingly important to provide consumers with adequate information to make informed decisions about their retirement planning, as reflected in EIOPA’s technical advice to the European Commission on best practices for the development of Pension Tracking Systems. EIOPA remains ready to provide further assistance in this area, as relevant.

TRANSITIONING TOWARDS A SUSTAINABLE ECONOMY WITH THE HELP OF DATA AND TECHNOLOGY
Technologies such as

  • AI,
  • Blockchain,
  • or the Internet of Things

can assist European insurance undertakings and pension schemes in the implementation of more sustainable business models and investments.

For example, greater insights provided by new datasets (e.g. satellite images or images taken by drones), combined with more granular AI systems, may make it possible to better assess climate change-related risks and provide advanced insurance coverage. Indeed, as highlighted by the Commission’s strategy on adaptation to climate change, actions aimed at adapting to climate change should be informed by more and better data on climate-related risks and losses, accessible to everyone, as well as by relevant risk assessment tools.

This would allow insurance undertakings to contribute to wider inclusion by incentivising customers to mitigate risks via policies whose pricing and contractual terms are based on effective measurements, e.g. with the use of telematics-based solutions in home insurance. However, there are also concerns about the impact on the affordability and availability of insurance for certain consumers (e.g. consumers living in areas highly exposed to flooding), as well as about the environmental impact of some technologies, notably the energy consumption of certain data centres and crypto-assets.

Promoting a sustainable economy is a core priority for EIOPA. For this purpose, EIOPA will develop a Sustainable Finance Action Plan highlighting, among other things, the importance of improving the accessibility and availability of data and models on climate-related risks and insured losses, and the role that EIOPA can play therein, as highlighted by the Commission’s strategy on adaptation to climate change and in line with the Green Deal data space foreseen in the Commission’s Data Strategy.


PREPARING FOR THE INCREASED USE OF ARTIFICIAL INTELLIGENCE WHILE FOCUSING ON FINANCIAL INCLUSION
TOWARDS ETHICAL AND TRUSTWORTHY ARTIFICIAL INTELLIGENCE IN THE EUROPEAN INSURANCE SECTOR
The take-up of AI in all the areas of the insurance value chain raises specific opportunities and
challenges; the variety of use cases is fast moving, while the technical, ethical and supervisory issues
thrown up in ensuring appropriate governance, oversight, and transparency are wide ranging.
Indeed, while the benefits of AI in terms of prediction accuracy, cost efficiency and automation are very relevant, the challenges raised by

  • the limited explainability of some AI systems
  • and the potential impact of some AI use cases on the fair treatment of consumers and the financial inclusion of vulnerable consumers and protected classes

are also significant.

A coordinated and coherent approach across markets, insurance undertakings and intermediaries,
and between supervisors is therefore of particular importance, also given the potential costs of
addressing divergences in the future. EIOPA acknowledges that AI can play a pivotal role in the digital transformation of the insurance and pension markets in the years to come and therefore the importance of establishing adequate governance frameworks to ensure ethical and trustworthy AI systems. EIOPA will seek to leverage the AI governance principles recently developed by its consultative expert group on digital ethics, to develop further sectorial work on specific AI use cases in insurance.

PROMOTING FINANCIAL INCLUSION IN THE DIGITAL AGE
On the one hand, new technologies and business models could be used to improve the financial inclusion of European citizens. For example, young drivers using telematics devices installed in their cars, or diabetes patients using health wearables, reportedly have access to more affordable insurance products. In addition to the incentives arising from advanced risk-based pricing, insurance undertakings could provide consumers with loss prevention / risk mitigation services (e.g. suggestions to drive safely or to adopt healthier lifestyles) to help them understand and mitigate their risk exposure.

From a different perspective, digital communication channels, new identity solutions and onboarding options could also facilitate access to insurance for certain customer segments.
On the other hand, certain categories of consumers, or consumers unwilling to share personal data, could encounter difficulties in accessing affordable insurance as a result of more granular risk assessments. This would, for instance, be the case for consumers who struggle to access affordable flood insurance as a result of detailed risk-based pricing enabled by satellite imagery processed by AI systems. In addition, other groups of potentially vulnerable consumers deserve special attention due to

  • their personal characteristics (e.g. elderly people or people in poverty),
  • life-time events (e.g. a car accident),
  • health conditions (e.g. undergoing therapy)
  • or difficulties in accessing digital services.

Furthermore, the trend towards increasingly data-driven business models can be compromised if adequate governance measures are not put in place to deal with biases in the datasets used, so as to avoid discriminatory outcomes.

EIOPA will assess the topic of financial inclusion from a broader perspective, i.e. not only from a digitalisation angle, seeking to promote the fair and ethical treatment of consumers, in particular in front-desk applications and in insurance lines of business that are particularly important due to their social impact.

EIOPA will routinely assess its consumer protection supervisory and policy work in view of its impact on financial inclusion, and will ensure that its work on digitalisation takes accessibility and inclusion impacts into account.

ENSURING A FORWARD-LOOKING APPROACH TO FINANCIAL STABILITY AND RESILIENCE
ENSURING A RESILIENT AND SECURE DIGITALISATION
As in other sectors of the economy, incumbent undertakings as well as InsurTech start-ups increasingly rely on information and communication technology (ICT) systems in the provision of insurance and pension services. Among other benefits, the adoption of innovative ICT allows undertakings to implement more efficient processes and reduce operational costs, enables data tracking and data backups in case of incidents, and allows greater accessibility and collaboration within the organisation (e.g. via cloud computing systems).

However, undertakings’ operations are also increasingly vulnerable to ICT security incidents, including cyberattacks. The complexity of some ICT systems, and the distinct governance applied to new technologies (e.g. cloud computing), is increasing, as is the frequency of ICT-related incidents (e.g. cyber incidents), which can have a considerable impact on undertakings’ operational functioning. Moreover, the growing relevance of large ICT service providers could also lead to concentration and contagion risks. Supervisory authorities need to take these developments into account and adapt their supervisory skills and competences accordingly.

Early on, EIOPA identified cyber security and ICT resilience as a key policy priority and in the years to come will focus on the implementation of those priorities, including the recently adopted cloud computing and ICT guidelines, and on the upcoming implementation of the Digital Operational Resilience Act (DORA).

ASSESSING THE PRUDENTIAL FRAMEWORK IN THE LIGHT OF DIGITALISATION
The Solvency II Directive sets out requirements applicable to insurance and reinsurance undertakings in the EU, with the aim of ensuring their financial soundness and providing adequate protection to policyholders and beneficiaries. The Solvency II Directive follows a proportionate, risk-based and technology-neutral approach, and therefore remains fully relevant in the context of digitalisation. Under this approach, all undertakings, including start-ups that wish to obtain a licence to benefit from Solvency II’s passporting rights to access the Internal Market via digital (and non-digital) distribution channels, need to meet the requirements foreseen in the Directive, including minimum capital requirements.

A prudential evaluation of digital transformation processes should consider that insurance undertakings are incurring high IT-related costs, which need to be appropriately reflected in their balance sheets. Furthermore, Solvency II’s outsourcing requirements and system of governance requirements are also relevant, in light of the increasing collaboration with third-party service providers (including BigTechs) and the use of new technologies such as AI. Investments in novel assets such as crypto-assets, as well as the trend towards the “platformisation” of the economy, are also relevant from a prudential perspective, as is the type of activities developed by insurance undertakings.

EIOPA considers that it is important to assess the prudential framework in light of the digital transformation that is taking place in the sector, seeking to ensure its financial soundness, promote greater supervisory convergence and also assess whether digital activities and related risks are adequately captured and if there are any undue regulatory barriers to digitalisation in this area.

REALISING THE BENEFITS OF THE EUROPEAN SINGLE MARKET
SUPPORTING THE DIGITAL SINGLE MARKET FOR INSURANCE AND PENSION PRODUCTS
Digital distribution can readily cross borders and reduce linguistic and other barriers; economies of scale linked to offering products to a wider market, increased competition, and greater variety of products and services for consumers are some of the benefits arising from the European Internal Market.

However, scaling up the scope and speed of distribution of products and services across the Internal Market is an area of major untapped potential. Indeed, while legislative initiatives such as the

  • Insurance Distribution Directive (IDD),
  • Solvency II Directive,
  • Packaged Retail and Insurance-based Investment Products (PRIIPs) Regulation,
  • or the Directive on the activities and supervision of institutions for occupational retirement provision (IORP II)

have made considerable progress towards the convergence of national regimes in Europe, considerable supervisory and regulatory divergences still persist amongst EU Member States.

For example, the IDD is a minimum harmonisation Directive, and existing regulation does not always allow for a fully digital approach. For instance, non-digital signature and paper-based requirements, such as those established by Article 23(1)(a) IDD and Article 14(2)(a) of the PRIIPs Regulation, can limit end-to-end digital workflows. It is critical that the opportunities – and risks, for instance in relation to financial inclusion and accessibility – that come with digital transformations are fully integrated into future policy work. In this context, the so-called 28th regime used in the Regulation on a pan-European Personal Pension Product (PEPP), which does not replace or harmonise national systems but coexists with them, is an approach that could eventually be explored, taking into account the lessons learned.

EIOPA supports the development of the Internal Market in times of transformation through the recalibration, where needed, of the IDD, Solvency II, PRIIPs and IORP II from a digital single market perspective. EIOPA will also explore what a digital single market for insurance might look like from a regulatory and supervisory perspective. Furthermore, EIOPA will integrate a digital ‘sense check’ into all of its policy work, where relevant.

SUPPORTING INNOVATION FACILITATORS IN EUROPE
In recent years many NCAs in the EU have adopted initiatives to facilitate financial innovation. These initiatives include the establishment of innovation facilitators such as ‘innovation hubs’ and ‘regulatory sandboxes’, which allow firms and supervisors to exchange views and experience on FinTech-related regulatory issues, enable the testing and development of innovative solutions in a controlled environment, and help firms learn more about supervisory expectations. These initiatives also allow supervisory authorities to gain a better understanding of the new technologies and business models emerging in the market.

At European level, the European Forum for Innovation Facilitators (EFIF), created in 2019, has become an important forum where European supervisors share experiences from their national innovation facilitators and discuss topics such as Artificial Intelligence, platformisation, RegTech or crypto-assets with stakeholders. The EFIF will soon be complemented by the Commission’s Digital Finance Platform, a new digital interface through which stakeholders of the digital finance ecosystem will be able to interact.

Innovation facilitators can play a key role in the implementation and adoption of innovative technologies and business models in Europe, and EIOPA will continue to support them through its work in the EFIF and the upcoming Digital Finance Platform. EIOPA will work to further facilitate cross-border / cross-sector cooperation and information exchanges on emerging business models.

ADDRESSING THE OPPORTUNITIES AND CHALLENGES OF FRAGMENTED VALUE CHAINS AND THE PLATFORM ECONOMY
New actors including InsurTech start-ups and BigTech companies are entering the insurance market,
both as competitors as well as cooperation partners of incumbent insurance undertakings.

Concerning the latter, incumbent undertakings reportedly turn increasingly to third-party service providers to gain quick and efficient access to new technologies and business models. For example, according to EIOPA’s Big Data Analytics thematic review, while the majority of the participating insurance undertakings using BDA solutions in the area of claims management developed these tools in-house, two thirds of the undertakings relied on outsourcing arrangements to implement AI-powered chatbots.

This trend is reinforced by the platformisation of the economy, which in the insurance sector goes beyond traditional comparison websites and is reflected in the development of complex ecosystems integrating different stakeholders. These stakeholders often share data via Application Programming Interfaces (APIs) and cooperate in the distribution of insurance products via platforms (including those of BigTechs), embedded (bundled) with other financial and non-financial services. In addition, in the broader context of Decentralised Finance (DeFi), Peer-to-Peer (P2P) insurance business models using digital platforms and different levels of decentralisation to interact with members with similar risk profiles have also emerged in several jurisdictions; although their significance in terms of gross written premiums is very limited to date, this is a matter that needs to be monitored.

EIOPA notes the opportunities and challenges arising from increasingly fragmented value chains and the platformisation of the economy, which will be reflected in the ESAs’ upcoming technical advice on digital finance to the European Commission, and will subsequently support any measures within its remit that may be needed to

  • encourage innovation and competition,
  • protect consumers,
  • safeguard financial stability
  • and ensure a level playing field.

ENHANCING THE SUPERVISORY CAPABILITIES OF EIOPA AND NCAS
LEVERAGING TECHNOLOGY AND DATA FOR MORE EFFICIENT SUPERVISION AND REGULATORY COMPLIANCE
Digital technologies can also help supervisors to implement more agile and efficient supervisory processes (commonly known as SupTech). They can support the continuous improvement of internal processes as well as business intelligence capabilities, including enhancing the analytical framework, developing risk assessments and publishing statistics. This can also include new capabilities for identifying and assessing conduct risks.

With its European perspective, EIOPA can play a key role by enhancing NCAs’ data analysis capabilities, based on extensive, rich datasets and appropriate processing tools.

As outlined in its SupTech Strategy and Data and IT Strategy, EIOPA’s objective is to promote its own transformation into a digital, user-focused and data-driven organisation that meets its strategic objectives effectively and efficiently. Several ongoing projects are already in place to achieve this objective.

INCREASING THE UNDERSTANDING OF NEW TECHNOLOGIES BY SUPERVISORS IN CLOSE COOPERATION WITH STAKEHOLDERS
Building supervisory capacity and convergence is a critical enabler for other benefits of digitalisation; without strong and convergent supervision, other benefits may be compromised. With the use of different tools available (innovation hubs, regulatory sandboxes, market monitoring, public consultations, desk-based reports etc.), supervisors seek to understand, engage and supervise increasingly technology-driven undertakings.

Close cooperation with stakeholders who have hands-on experience in the use of innovative tools has proved to be a useful way for supervisors to improve their knowledge; equally, it is important for stakeholders to understand supervisory expectations.

Certainly, the profile of supervisors needs to evolve: they need to extend their knowledge into new areas and understand how new business models and value chains may impact undertakings and intermediaries, both from a conduct and from a prudential perspective. Moreover, in view of the growing importance of new technologies and business models for insurance undertakings and pension schemes, it is important to ensure that supervisors have access to relevant data about these developments in order to enable evidence-based supervision.

EIOPA aims to continue encouraging the sharing of knowledge and experience amongst NCAs by organising InsurTech roundtables, workshops and seminars for supervisors, as well as pursuing further deep-dive analyses of certain financial innovation topics. EIOPA will also place further emphasis on an evidence-based supervisory approach by developing a regular collection of harmonised data on digitalisation topics, and will develop a stakeholder engagement strategy on digitalisation topics to identify the actors and areas where cooperation should be reinforced.

Achieving Effective IFRS 17 Reporting – Enabling the right accounting policy through technology

Executive summary

International Financial Reporting Standard (IFRS) 17, the first comprehensive global accounting standard for insurance products, is due to be implemented in 2023, and is the latest standard developed by the International Accounting Standards Board (IASB) in its push for international accounting standards.

IFRS 17, following other standards such as IFRS 9 and Current Expected Credit Losses (CECL), is the latest move toward ‘risk-aware accounting’, a framework that aims to incorporate financial and non-financial risk into accounting valuation.

As a principles-based standard, IFRS 17 provides room for different interpretations, meaning that insurers have choices to make about how to comply. The explicit integration of financial and non-financial risk has caused much discussion about the unprecedented and distinctive modeling challenges that IFRS 17 presents. These could cause ‘tunnel vision’ among insurers when it comes to how they approach compliance.

But all stages of IFRS 17 compliance are important, and each raises distinct challenges. By focusing their efforts on any one aspect of the full compliance value chain, insurers risk failing to comply adequately. In the case of IFRS 17, it is not necessarily accidental non-compliance that is at stake, but rather the sub-optimal presentation of the business’ profits.

To achieve ‘ideal’ compliance, firms need to focus on the logistics of reporting as much as on the mechanics of modeling. Effective and efficient reporting comprises two elements: presentation and disclosure. Reporting is the culmination of the entire compliance value chain, and decisions made further up the chain can have a significant impact on the way that value is presented. Good reporting is achieved through a mixture of technology and accounting policy, and firms should follow several strategies in achieving this:

  • Anticipate how the different IFRS 17 measurement models will affect balance sheet volatility.
  • Understand the different options for disclosure, and which approach is best for specific institutional needs.
  • Streamline IFRS 17 reporting with other reporting duties.
  • Where possible, aim for collaborative report generation while maintaining data integrity.
  • Explore and implement technology that can service IFRS 17’s technical requirements for financial reporting.
  • Store and track data on a unified platform.

In this report we focus on the challenges associated with IFRS 17 reporting, and consider solutions to those challenges from the perspectives of accounting policy and technology implementation. And in highlighting the reporting stage of IFRS 17 compliance, we focus specifically on how decisions about the presentation of data can dictate the character of final disclosure.

  • Introduction: more than modeling

IFRS 17 compliance necessitates repeated stochastic calculations to capture financial and non-financial risk (especially in the case of long-term insurance contracts). Insurance firms consistently identify modeling and data management as the challenges they most anticipate having to address in their efforts to comply. Much of the conversation and ‘buzz’ surrounding IFRS 17 has therefore centered on its modeling requirements, and in particular the contractual service margin (CSM) calculation.

But there is always a danger that firms will get lost in the complexity of compliance and forget the aim of IFRS 17. Although complying with IFRS 17 involves multiple disparate process elements and activities, it is still essentially an accounting standard. First and foremost, its aim is to ensure the transparent and comparable disclosure of the value of insurance services. So while IFRS 17 calculations are crucial, they are just one stage in the compliance process, and ultimately enable the intended outcome: reporting.

Complying with the modeling requirements of IFRS 17 should not create ‘compliance tunnel vision’ at the expense of the presentation and disclosure of results. Rather, presentation and disclosure are the culmination of the IFRS 17 compliance process flow and are key elements of effective reporting (see Figure 1).

  • Developing an IFRS 17 accounting policy

A key step in developing reporting compliance is having an accounting policy tailored to a firm’s specific interaction with IFRS 17. Firms have decisions to make about how to comply, together with considerations of the knock-on effects IFRS 17 will have on the presentation of their comprehensive statements of income.

There are a variety of considerations: in some areas IFRS 17 affords a degree of flexibility; in others it does not. Areas that will substantially affect the appearance of firms’ profits are:

• The up-front recognition of loss and the amortization of profit.
• The new unit of account.
• The separation of investment components from insurance services.
• The recognition of interest rate changes under the general measurement model (GMM).
• Deferred acquisition costs under the premium allocation approach (PAA).

As a principles-based standard, IFRS 17 affords a degree of flexibility in how firms approach valuation. One of its aims is to ensure that entity-specific risks and diverse contract features are adequately reflected in valuations, while still safeguarding reporting comparability. This flexibility also gives firms some degree of control over the way that value and risk are portrayed in financial statements. However, some IFRS 17 stipulations will lead to inevitable accounting mismatches and balance-sheet volatility.

Accounting policy impacts and choices – Balance sheet volatility

One unintended consequence of IFRS 17 compliance is balance sheet volatility. As an instance of risk-aware accounting, IFRS 17 requires the value of insurance services to be market-adjusted. This adjustment is based on a firm’s projection of future cash flow, informed by calculated financial risk. Moreover, although this will not be the first time firms incorporate non-financial risk into valuations, it is the first time it has to be explicit.

Market volatility will be reflected in the balance sheet, as liabilities and assets are subject to interest rate fluctuation and other financial risks. The way financial risk is incorporated into the value of a contract can also contribute to balance sheet volatility. The way it is incorporated is dictated by the measurement model used to value it, which depends on the eligibility of the contract.

There are three measurement models: the PAA, the GMM and the variable fee approach (VFA). All three are considered in the next section.

The three measurement models

Features of the three measurement models (see Figure 2) can have significant effects on how profit – represented by the CSM – is presented and ultimately disclosed.

To illustrate the choices around accounting policy that insurance firms will need to consider and make, we provide two specific examples, for the PAA and the GMM.

Accounting policy choices: the PAA

When applying the PAA to shorter contracts – generally those of fewer than 12 months – firms have several choices to make about accounting policy. One is whether to defer acquisition costs. Unlike under previous reporting regimes, under IFRS 17’s PAA indirect costs cannot be deferred as acquisition costs. Firms can either expense these costs upfront or defer them and amortize the cost over the length of the contract. Expensing acquisition costs as they are incurred may affect whether a group of contracts is characterized as onerous at inception. Deferring acquisition costs reduces the liability for remaining coverage; however, it may also increase the loss recognized in the income statement for onerous contracts.
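To make the trade-off concrete, here is a deliberately simplified numerical sketch of the two options; the figures and the straight-line amortization pattern are assumptions for illustration, not prescriptions of the standard.

```python
# Simplified comparison of the two PAA choices for acquisition costs.
premium = 1200.0           # premium for a 12-month contract
acquisition_costs = 120.0
months = 12

# Option 1: expense as incurred -- the full cost hits month 1,
# which may tip the group into looking onerous at inception.
expense_month_1 = acquisition_costs

# Option 2: defer and amortize straight-line over the coverage period.
monthly_amortization = acquisition_costs / months   # 10.0 per month

# Deferring reduces the liability for remaining coverage (LRC) at initial
# recognition, but can increase the loss recognized on onerous contracts.
lrc_if_expensed = premium
lrc_if_deferred = premium - acquisition_costs

print(expense_month_1, monthly_amortization, lrc_if_expensed, lrc_if_deferred)
```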

Accounting policy choices: the GMM

Under IFRS 17, revenue is the sum of

  • the release of CSM,
  • changes in the risk adjustment,
  • and expected net cash outflows, excluding any investment components.

Excluding any investment component from revenue recognition will have significant impacts on contracts being sold by life insurers.
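As a toy illustration of this revenue build-up (all figures invented), note how the investment component drops out of revenue entirely:

```python
# Toy illustration of the GMM revenue composition described above.
csm_release = 50.0
risk_adjustment_change = 10.0
expected_outflows = 400.0        # expected claims and expenses
investment_component = 120.0     # repaid to policyholders regardless of claims

# The investment component is excluded from insurance revenue.
insurance_revenue = (
    csm_release
    + risk_adjustment_change
    + (expected_outflows - investment_component)
)
print(insurance_revenue)  # 340.0
```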

Contracts without direct participation features measured under the GMM use a locked-in discount rate – whether this is calculated ‘top down’ or ‘bottom up’ is at the discretion of the firm. Adjustments to the CSM have to be made using the discount rate set at the initial recognition of the contract. Changes in financial variables that differ from the locked-in discount rate cannot be absorbed into the CSM.

Instead, a firm must account for these changes directly in the comprehensive income statement, and this can also contribute to balance sheet volatility.

As part of their accounting policy, firms have a choice about how to recognize changes in discount rates and other changes to financial risk assumptions – between other comprehensive income (OCI) and profit and loss (P&L). Recognizing fluctuations in discount rates and financial risk in the OCI reduces some volatility in P&L. Firms also recognize the fair value of assets in the OCI under IFRS 9.
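The sketch below (rates and cash flows invented) separates the two moving parts: interest is accreted on the CSM at the locked-in rate, while the remeasurement effect of current rates bypasses the CSM and lands in P&L or OCI, depending on the firm's election.

```python
# Sketch of the GMM locked-in discount rate mechanics; figures are illustrative.
locked_in_rate = 0.02      # set at initial recognition of the contract
current_rate = 0.04        # market rate at the reporting date
future_cash_flow = 1000.0  # single expected payment in 10 years

def present_value(rate: float, cash_flow: float, years: int = 10) -> float:
    return cash_flow / (1 + rate) ** years

# Interest is accreted on the CSM at the locked-in rate only.
csm = 80.0
csm_after_accretion = csm * (1 + locked_in_rate)

# The gap between locked-in and current-rate discounting cannot adjust the
# CSM; the firm elects to present it in P&L or in OCI.
remeasurement = (
    present_value(current_rate, future_cash_flow)
    - present_value(locked_in_rate, future_cash_flow)
)
print(round(csm_after_accretion, 2), round(remeasurement, 2))
```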

  • The technology perspective

Data integrity and control

At the center of IFRS 17 compliance and reporting is the management of a wide spectrum of data – firms will have to gather and generate data from historic, current and forward-looking perspectives.

Creating IFRS 17 reports will be a non-linear process, and data will be incorporated as it becomes available from multiple sources. For many firms, contending with this level of data granularity and volume will be a big leap from other reporting requirements. The maturity of an insurer’s data infrastructure is partly defined by the regulatory and reporting context it was built in, and in which it operates – entities across the board will have to upgrade their data management technology.

In regions such as Southeast Asia and the Middle East, however, data management on the scale of IFRS 17 is unprecedented. Entities operating in these regions in particular will have to expend considerable effort to upgrade their infrastructure. Manual spreadsheets and complex legacy systems will have to be replaced with data management technology across the compliance value chain.

According to a 2018 survey by Deloitte, 87% of insurers believed that their systems technology required upgrades to capture the new data they have to handle and perform the calculations they require for compliance. Capturing data inputs was cited as the biggest technology challenge.

Tracking and linking the data lifecycle

Compliance with IFRS 17 demands data governance across the entire insurance contract valuation process. The data journey starts at the data source and travels through aggregation and modeling processes all the way to the disclosure stage (see Figure 3).

In this section we focus on the specific areas of data lineage, data tracking and the auditing processes that run along the entire data compliance value chain. For contracts longer than 12 months, the valuation process will be iterative, as data is transformed multiple times by different users. Having a single version of reporting data makes it easier to collaborate, track and manage the iterative process of adapting to IFRS 17. Cloud platforms help to address this challenge, providing an effective means of storing and managing the large volumes of reporting data generated by IFRS 17. The cloud allows highly scalable, flexible technology to be delivered on demand, enabling simultaneous access to the same data for internal teams and external advisors.

It is essential that amendments are tracked and stored as data passes through different hands and different IFRS 17 ‘compliance stages’. Data lineage processes can systematically track users’ interactions with data, improving the ‘auditability’ of the compliance process and users’ ‘ownership’ of activity.

Data linking is another method of managing IFRS 17 reporting data. Data linking contributes to data integrity while enabling multiple users to make changes to data. It enables the creation of relationships across values while maintaining the integrity of the source value, so changing the source value creates corresponding changes across all linked values. Data linking also enables the automated movement of data from spreadsheets to financial reports, updating data as it is changed and tracking users’ changes to it.
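A minimal sketch of this data-linking idea in Python: the source value keeps its integrity while linked downstream values update automatically, and every edit is logged for lineage. The class and field names here are ours, purely for illustration.

```python
# Minimal data-linking sketch: one source value, many mirrored targets,
# with an audit trail of every change for lineage purposes.
import datetime

class LinkedValue:
    def __init__(self, value, owner: str):
        self._value = value
        self._links = []          # downstream cells that mirror this value
        self.audit_trail = []     # (timestamp, user, value) tuples
        self._record(owner, value)

    def _record(self, user: str, value):
        self.audit_trail.append(
            (datetime.datetime.now(datetime.timezone.utc), user, value)
        )

    def link(self, target: dict, key: str):
        """Register a downstream location that should mirror this value."""
        self._links.append((target, key))
        target[key] = self._value

    def set(self, value, user: str):
        """Change the source; all linked values follow, and the edit is logged."""
        self._value = value
        self._record(user, value)
        for target, key in self._links:
            target[key] = value

# Usage: a modelled CSM figure flows into both a note disclosure and the
# balance sheet; updating the source updates both, with full lineage.
report_note, balance_sheet = {}, {}
csm = LinkedValue(80.0, owner="actuarial")
csm.link(report_note, "csm_note")
csm.link(balance_sheet, "csm")
csm.set(82.5, user="finance")
assert report_note["csm_note"] == balance_sheet["csm"] == 82.5
```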

Disclosing the data

IFRS 17 is more than just a compliance exercise: it will have a fundamental impact on how insurance companies report their data internally, to regulators, and to financial markets. For the final stage of compliance, firms will need to adopt a new format for the balance sheet, P&L statement and cash flow statements.

In addition to the standard preparation of financial statements, IFRS 17 will require a number of disclosures, including the explanation of recognized amounts, significant judgements made in applying IFRS 17, and the nature and extent of risks arising from insurance contracts. As part of their conversion to IFRS 17, firms will need to assess how data will have to be managed on a variety of levels, including

  • transactions,
  • financial statements,
  • regulatory disclosures,
  • internal key performance indicators
  • and communications to financial markets.

Communication with capital markets will be more complex, because of changes that will have to be made in several areas:

  • The presentation of financial results.
  • Explanations of how calculations were made, and around the increased complexity of the calculations.
  • Footnotes to explain how data is being reported in ‘before’ and ‘after’ conversion scenarios.

During their transition, organizations will have to report and explain to the investor community which changes were the result of business performance and which were the result of a change in accounting basis. The new reporting basis will also impact how data will be reported internally, as well as overall effects on performance management. The current set of key metrics used for performance purposes, including volume, revenue, risk and profitability, will have to be adjusted for the new methodology and accounting basis. This could affect how data will be reported on and reconciled for current regulatory reporting requirements including Solvency II, local solvency standards, and broader statutory and tax reporting.

IFRS 17 will drive significant changes in the current reporting environment. To address this challenge, firms must plan how they will manage both the pre-conversion and post-conversion data sets, the preparation of pre-, post-, and comparative financial statements, and the process of capturing and disclosing all of the narrative that will support and explain these financial results.

In addition, in managing the complexity of the numbers and the narrative before, during and after the conversion, reporting systems will also need to scale to meet the requirements of regulatory reporting – including disclosure in eXtensible Business Reporting Language (XBRL) in some jurisdictions. XBRL is a global reporting markup language that enables the encoding of documents in a human- and machine-legible format for business reporting (the IASB publishes its IFRS Taxonomy files in XBRL).

But XBRL tagging can be a complex, time-consuming and repetitive process, and firms should consider using available technology partners to support the tagging and mapping demands of document drafting.
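For a flavour of what tagging involves, the sketch below builds a single XBRL fact with Python's standard library. The xbrli namespace URI is the real XBRL instance namespace; the taxonomy namespace, element name and context details are simplified placeholders rather than actual IFRS Taxonomy references.

```python
# Minimal, illustrative XBRL instance fragment built with the standard library.
import xml.etree.ElementTree as ET

XBRLI = "http://www.xbrl.org/2003/instance"   # real XBRL instance namespace
IFRS = "https://example.org/ifrs-taxonomy"    # placeholder taxonomy namespace

ET.register_namespace("xbrli", XBRLI)
ET.register_namespace("ifrs", IFRS)

root = ET.Element(f"{{{XBRLI}}}xbrl")

# Context: which entity and which reporting period the fact belongs to.
context = ET.SubElement(root, f"{{{XBRLI}}}context", id="FY2023")
entity = ET.SubElement(context, f"{{{XBRLI}}}entity")
ET.SubElement(entity, f"{{{XBRLI}}}identifier",
              scheme="http://lei.example").text = "LEI-DUMMY"
period = ET.SubElement(context, f"{{{XBRLI}}}period")
ET.SubElement(period, f"{{{XBRLI}}}startDate").text = "2023-01-01"
ET.SubElement(period, f"{{{XBRLI}}}endDate").text = "2023-12-31"

# Unit: the currency of the monetary fact.
unit = ET.SubElement(root, f"{{{XBRLI}}}unit", id="EUR")
ET.SubElement(unit, f"{{{XBRLI}}}measure").text = "iso4217:EUR"

# One tagged fact; the element name is a placeholder, not a taxonomy reference.
fact = ET.SubElement(root, f"{{{IFRS}}}InsuranceRevenue",
                     contextRef="FY2023", unitRef="EUR", decimals="0")
fact.text = "340000000"

print(ET.tostring(root, encoding="unicode"))
```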

A Practical Guide to Analytics and AI in the Cloud With Legacy Data

Introduction

Businesses that use legacy data sources such as mainframe have invested heavily in building a reliable data platform. At the same time, these enterprises want to move data into the cloud for the latest in analytics, data science and machine learning.

The Importance of Legacy Data

Mainframe is still the processing backbone for many organizations, constantly generating important business data.

It’s crucial to consider the following:

MAINFRAME IS THE ENTERPRISE TRANSACTION ENVIRONMENT

In 2019, there was a 55% increase in transaction volume on mainframe environments. Studies estimate that 2.5 billion transactions are run per day, per legacy system across the world.

LEGACY IS THE FUEL BEHIND CUSTOMER EXPERIENCES

Within industries such as financial services and insurance, most customer information lives on legacy systems. Over 70% of enterprises say their customer-facing applications are completely or very reliant on mainframe processing.

BUSINESS-CRITICAL APPLICATIONS RUN ON LEGACY SYSTEMS

Mainframe often holds business-critical information and applications — from credit card transactions to claims processing. Over half of enterprises with a mainframe run more than half of their business-critical applications on the platform.

However, legacy systems also present a limitation for an organization in its analytics and data science journey. While moving everything to the cloud may not be the answer, identifying ways to start a legacy modernization process is crucial to the next generation of data and AI initiatives.

The Cost of Legacy Data

Across the enterprise, legacy systems such as the mainframe serve as a critical piece of infrastructure that is ripe for integration with modern analytics platforms. If a modern analytics platform is only as good as the data fed into it, then enterprises must include all data sources for success. However, many complexities can arise when organizations look to build data integration pipelines between their modern analytics platform and legacy sources. As a result, the plans made to connect these two areas are often easier said than done.

DATA SILOS HINDER INNOVATION

Over 60% of IT professionals with legacy and modern technology in house are finding that data silos are negatively affecting their business. As data volumes increase, IT can no longer rely on current data integration approaches to solve their silo challenges.

CLOUDY BUSINESS INSIGHTS

Business demands that more decisions are driven by data. Still, few IT professionals who work with legacy systems feel they are successful in delivering data insights that reside outside their immediate department. Data-driven insights will be the key to competitive success. The inability to provide insights puts a business at risk.

SKILLS GAP WIDENS

While it may be difficult to find skills for the latest technology, it’s becoming even harder to find skills for legacy platforms. Enterprises have only replaced 37% of the mainframe workforce lost over the past five years. As a result, the knowledge needed to integrate mainframe data into analytics platforms is disappearing. While the drive for building a modern analytics platform is more powerful than ever, taking this initiative and improving data integration practices that encompass all enterprise data has never been more challenging.

The success of building a modern analytics platform hinges on understanding the common challenges of integrating legacy data sources and choosing the right technologies that can scale with the changing needs of your organization.

Challenges Specific to Extracting Mainframe Data

With so much valuable data on mainframe, the most logical thing to do would be to connect these legacy data sources to a modern data platform. However, many complexities can occur when organizations begin to build integration pipelines to legacy sources. As a result, the plans made to connect these two areas are often easier said than done. Shared challenges of extracting mainframe data for integration with modern analytics platforms include the following:

DATA STRUCTURE

It’s common for legacy data not to be readily compatible with downstream analytics platforms, open-source frameworks and data formats. The structures of legacy data sources differ from relational data, with traits such as

  • hierarchical tables,
  • embedded headers and trailers,
  • and complex data structures (e.g., nested, repeated or redefined elements).

If COBOL redefines and logic are set up incorrectly at the start of a data integration workflow, legacy data structures risk slowing processing speeds to the point of business disruption and can lead to incorrect data for downstream consumption.

METADATA

COBOL copybooks can be a massive hurdle to overcome when integrating mainframe data. COBOL copybooks are the metadata blocks that define the physical layout of data but are stored separately from that data. As a result, they can be quite complicated, containing not just formatting information but also logic, for example in the form of nested OCCURS DEPENDING ON clauses. For many mainframe files, hundreds of copybooks may map to a single file. Feeding mainframe data directly into an analytics platform can result in significant data confusion.

DATA MAPPING

Unlike an RDBMS, which needs data to be entered into a table or column, nothing enforces a set data structure on the mainframe. COBOL copybooks are incredibly flexible: they

  • can group multiple pieces of data into one,
  • or subdivide a field into various fields,
  • or ignore whole sections of a record.

As a result, data mapping issues will arise. The copybooks reflect the needs of the program, not the needs of a data-driven view.

DIFFERENT STORAGE FORMATS

Numeric values stored one way on a mainframe are often stored differently when the data moves to the cloud. Additionally, mainframes use a different character encoding scheme (EBCDIC rather than ASCII — an 8-bit structure versus a 7-bit one). Mainframes also use multiple numeric encoding schemes that “pack” numbers into less storage space (e.g., packed decimal). In addition to complex storage formats, there are techniques that use each individual bit to store data.
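As a concrete illustration of these formats, the sketch below decodes one fixed-width mainframe record in Python: an EBCDIC text field followed by a packed-decimal (COMP-3) amount. The record layout is invented for the example; 'cp037' is Python's built-in EBCDIC (US/Canada) codec.

```python
# Decode one fixed-width mainframe record: 8 bytes of EBCDIC text,
# then a 4-byte packed-decimal (COMP-3) amount. Layout is invented.
def unpack_comp3(raw: bytes, scale: int = 2) -> float:
    """Packed decimal: two digits per byte, sign in the final nibble."""
    digits = []
    for byte in raw[:-1]:
        digits += [byte >> 4, byte & 0x0F]
    digits.append(raw[-1] >> 4)      # last byte holds one digit + the sign
    sign_nibble = raw[-1] & 0x0F     # 0xD marks a negative value
    value = int("".join(map(str, digits)))
    if sign_nibble == 0x0D:
        value = -value
    return value / (10 ** scale)

record = bytes.fromhex(
    "D1D6C8D540C4D6C5"   # "JOHN DOE" in EBCDIC (8 bytes)
    "0012345C"           # +123.45 as packed decimal (4 bytes)
)
name = record[:8].decode("cp037").strip()
amount = unpack_comp3(record[8:12])
print(name, amount)      # JOHN DOE 123.45
```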

Whether it’s a lack of internal knowledge on how to handle legacy data or a rigid data framework, ignoring legacy data when building a modern data analytics platform means missing valuable information that can enhance any analytics project.

Pain Points of Building a Modern Analytics Platform

Tackling the challenges of mainframe data integration is no simple task. Besides determining the best approach for integrating these legacy data sources, IT departments are also dealing with the everyday challenges of running a department. Regardless of the size of an organization, there are daily struggles everyone faces, from siloed data to lack of IT skills.

ENVIRONMENT COMPLEXITY

Many organizations have adopted hybrid and multi-cloud strategies to

  • manage data proliferation,
  • gain flexibility,
  • reduce costs
  • and increase capacities.

Cloud storage and the lakehouse architecture offer new ways to manage and store data. However, organizations still need to maintain and integrate their mainframes and other on-premises systems — resulting in a challenging integration strategy that must encompass a variety of environments.

SILOED DATA

The increase in data silos adds further complexity to growing data volumes. Data silo creation happens as a direct result of increasing data sources. Research has shown that data silos have directly inhibited the success of analytics and machine learning projects.

PERFORMANCE

Processing the requirements of growing data volumes can cause a slowdown in a data stream. Loading hundreds, or even thousands, of database tables into a big data platform — combined with an inefficient use of system resources — can create a data bottleneck that hampers the performance of data integration pipelines.

DATA QUALITY

Industry studies have shown that up to 90% of a data scientist’s time is spent getting data into the right condition for use in analytics. In other words, without that preparation, the data feeding analytics cannot be trusted. Data quality processes that include

  • mapping,
  • matching,
  • linking,
  • merging,
  • deduplication
  • and actionable data

are critical to providing frameworks with trusted data.

DATA TYPES AND FORMATS

Valuable data for analytics comes from a range of sources across the organization from CRM, ERPs, mainframes and online transaction processing systems. However, as organizations rely on more systems, the data types and formats continue to grow.

IT now has the challenge of making big data, NoSQL and unstructured data all readable for downstream analytics solutions.

SKILLS GAP AND RESOURCES

The need for workers who understand how to build data integration frameworks for mainframe, cloud, and cluster data sources is increasing, but the market cannot keep up. Studies have shown that unfilled data engineer jobs and data scientist jobs have increased 12x in the past year alone. As a result, IT needs to figure out how to integrate data for analytics with the skills they have internally.

What Your Cloud Data Platform Needs

A new data management paradigm has emerged that combines the best elements of data lakes and data warehouses, enabling

  • analytics,
  • data science
  • and machine learning

on all your business data: lakehouse.

Lakehouses are enabled by a new system design: implementing data structures and data management features similar to those in a data warehouse directly on the kind of low-cost storage used for data lakes. They are what you would get if you redesigned data warehouses in the modern world, now that cheap and highly reliable storage (in the form of object stores) is available.

This new paradigm is the vision for data management that provides the best architecture for modern analytics and AI. It will help organizations capture data from hundreds of sources, including legacy systems, and make that data available and ready for analytics, data science and machine learning.

Lakehouse

A lakehouse has the following key features:

  • Open storage formats, such as Parquet, avoid lock-in and provide accessibility to the widest variety of analytics tools and applications
  • Decoupled storage and compute provides the ability to scale to many concurrent users by adding compute clusters that all access the same storage cluster
  • Transaction support handles failure scenarios and provides consistency when multiple jobs concurrently read and write data
  • Schema management enforces the expected schema when needed and handles evolving schemas as they change over time
  • Business intelligence tools directly access the lakehouse to query data, enabling access to the latest data without the cost and complexity of replicating data across a data lake and a data warehouse
  • Data science and machine learning tools used for advanced analytics rely on the same data repository
  • First-class support for all data types across structured, semi-structured and unstructured, plus batch and streaming data

Click here to access Databricks’ and Precisely’s White Paper

How To Build a CX Program And Transform Your Business

Customer Experience (CX) is a catchy business term that has been used for decades, but until recently measuring and managing it was not possible. Now, with the evolution of technology, a company can build and operationalize a true CX program.

For years, companies championed NPS surveys, CSAT scores, web feedback, and other sources of data as the drivers of “Customer Experience” – however, these singular sources of data don’t give a true, comprehensive view of how customers feel, think, and act. Unfortunately, most companies aren’t capitalizing on the benefits of a CX program. Less than 10% of companies have a CX executive, and of those companies, only 14% believe that Customer Experience, as a program, is the aggregation and analysis of all customer interactions, with the objective of uncovering and disseminating insights across the company in order to improve the experience. At a time when the customer experience separates the winners from the losers, CX must be more of a priority for ALL businesses.

This not only includes the analysis of typical channels in which customers directly interact with your company (calls, chats, emails, feedback, surveys, etc.) but all the channels in which customers may not be interacting directly with you – social, reviews, blogs, comment boards, media, etc.


In order to understand the purpose of a CX team and how it operates, you first need to understand how most businesses organize, manage, and carry out their customer experiences today.

Essentially, a company’s customer experience is owned and managed by a handful of teams. This includes, but is not limited to:

  • digital,
  • brand,
  • strategy,
  • UX,
  • retail,
  • design,
  • pricing,
  • membership,
  • logistics,
  • marketing,
  • and customer service.

All of these teams have a hand in customer experience.

In order to affirm that they are working towards a common goal, they must

  1. communicate in a timely manner,
  2. meet and discuss upcoming initiatives and projects,
  3. and discuss results along with future objectives.

In a perfect world, every team has the time and passion to accomplish these tasks to ensure the customer experience is in sync with their work. In reality, teams end up scrambling for information and understanding of how each business function is impacting the customer experience – sometimes after the CX program has already launched.


This process is extremely inefficient and can lead to serious problems across the customer experience. These problems can lead to irreparable financial losses. If business functions are not on the same page when launching an experience, it creates a broken one for customers. Siloed teams create siloed experiences.

There are plenty of companies that operate in a semi-siloed manner and feel it is successful. What these companies don’t understand is that customer experience issues often occur between the ownership of these silos, in what some refer to as the “customer experience abyss,” where no business function claims ownership. Customers react to these broken experiences by communicating their frustration through different communication channels (chats, surveys, reviews, calls, tweets, posts etc.).

For example, if a company launches a new subscription service and customers are confused about the pricing model, is it the job of customer service to explain it to customers?  What about those customers that don’t contact the business at all? Does marketing need to modify their campaigns? Maybe digital needs to edit the nomenclature online… It could be all of these things. The key is determining which will solve the poor customer experience.

The objective of a CX program is to focus deeply on what customers are saying and shift business teams to become advocates for what they say. Once advocacy is achieved, the customer experience can be improved at scale with speed and precision. A premium customer experience is the key to company growth and customer retention. How important is the customer experience?

You may be saying to yourself, “We already have teams examining our customer data, no
need to establish a new team to look at it.” While this may be true, the teams are likely taking a siloed approach to analyzing customer data by only investigating the portion of the data they own.

For example, the social team looks at social data, the digital team analyzes web feedback and analytics, the marketing team reviews surveys and performs studies, etc. Seldom do these teams come together and combine their data to get a holistic view of the customer. Furthermore, when it comes to prioritizing CX improvements, they do so based on an incomplete view of the customer.

Consolidating all customer data gives a unified view of your customers while lessening the workload and increasing the rate at which insights are generated. The experience customers have with marketing, digital, and customer service, all lead to different interactions. Breaking these interactions into different, separate components is the reason companies struggle with understanding the true customer experience and miss the big picture on how to improve it.

The CX team, once established, will be responsible for creating a unified view of the customer which will provide the company with an unbiased understanding of how customers feel about their experiences as well as their expectations of the industry. These insights will provide awareness, knowledge, and curiosity that will empower business functions to improve the end-to-end customer experience.

CX programs are disruptive. A successful CX program will uncover insights that align with current business objectives and some insights that don’t at all. So, what do you do when you run into that stone wall? How do you move forward when a business function refuses to adopt the voice of the customer? Call in back-up from an executive who understands the value of the voice of the customer and why it needs to be top of mind for every function.

When creating a disruptive program like CX, an executive owner is needed to overcome business hurdles along the way. Ideally, this executive owner will support the program and promote it to the broader business functions. In order to scale and become more widely adopted, it is also helpful to have executive support when the program begins.

The best candidates for initial ownership are typically marketing, analytics or operations executives. Along with understanding the value a CX program can offer, they should also understand the business’ current data landscape and help provide access to these data sets. Once the CX team has access to all the available customer data, it will be able to aggregate all necessary interactions.

Executive sponsors will help dramatically in regard to CX program adoption and eventual scaling. Executive sponsors

  • can provide the funding to secure the initial success,
  • promote the program to ensure other business functions work closer to the program,
  • and remove roadblocks that may otherwise take weeks to get over.

Although an executive sponsor is not strictly necessary, it can make your life exponentially easier while you build, launch, and execute your CX program. Your customers don’t always tell you what you want to hear, and that can be difficult for some business functions to handle. When this is the case, some business functions will try to discredit insights altogether if they don’t align with their goals.

Data grows exponentially every year, faster than any company can manage. In 2016, 90% of the world’s data had been created in the previous two years, and 80% of that data was unstructured language. The hype of “Big Data” has passed and the focus is now on “Big Insights” – how to manage all the data and make it useful. Companies should not be allocating resources to collecting more data through expensive surveys or market research – instead, they should focus on doing a better job of listening and reacting to what customers are already saying, by unifying the voice of the customer with data that is already readily available.

It’s critical to identify all the available customer interactions and determine each one’s value and richness. Be sure to think about all the forms of direct and indirect interaction customers have. This includes:

CX3

These channels are just a handful of the most popular avenues customers use to engage with brands. Your company may have more, fewer, or none of these. Regardless, the focus should be on aggregating as many as possible to create a holistic view of the customer. This does not mean only aggregating your phone calls and chats; it includes every channel where your customers talk with, at, or about your company. You can’t be selective when it comes to analyzing your customers by channel. All customers are important, and they may have different ways of communicating with you.

Imagine if someone only listened to their significant other in the two rooms where they spend the most time, say the family room and kitchen. They would probably have a good understanding of the overall conversations (similar to a company only reviewing calls, chats, and social). However, ignoring them in the dining room, bedroom, kids’ rooms, and backyard, would inevitably lead to serious communication problems.

It’s true that phone, chat, and social data is extremely rich, accessible, and popular, but that doesn’t mean you should ignore other customers. Every channel is important. Each is used by a different customer, in a different manner, and serves a different purpose, some providing more context than others.

You may find your most important customers aren’t always the loudest and may be interacting with you through an obscure channel you never thought about. You need every customer channel to fully understand their experience.

Click here to access Topbox’s detailed study

Better practices for compliance management

The main compliance challenges

We know that businesses and government entities alike struggle to manage compliance requirements. Many have put up with challenges for so long—often with limited resources—that they no longer see how problematic the situation has become.

FIVE COMPLIANCE CHALLENGES YOU MIGHT BE DEALING WITH

01 COMPLIANCE SILOS
It’s not uncommon that, over time, separate activities, roles, and teams develop to address different compliance requirements. There’s often a lack of integration and communication among these teams or individuals. The result is duplicated efforts—and the creation of multiple clumsy and inefficient systems. This is then perpetuated as compliance processes change in response to regulations, mergers and acquisitions, or other internal business re-structuring.

02 NO SINGLE VIEW OF COMPLIANCE ASSURANCE
Siloed compliance systems also make it hard for senior management to get an overview of current compliance activities and perform timely risk assessments. If you can’t get a clear view of compliance risks, then chances are good that a damaging risk will slip under the radar, go unaddressed, or simply be ignored.

03 COBBLED TOGETHER, HOME-GROWN SYSTEMS
Using generalized software, like Excel spreadsheets and Word documents, in addition to shared folders and file systems, might have made sense at one point. But, as requirements become more complex, these systems become more frustrating, inefficient, and risky. Compiling hundreds or thousands of spreadsheets to support compliance management and regulatory reporting is a logistical nightmare (not to mention time-consuming). Spreadsheets are also prone to error and limited because they don’t provide audit trails or activity logs.

04 OLD SOFTWARE, NOT DESIGNED TO KEEP UP WITH FREQUENT CHANGES
You could be struggling with older compliance software products that aren’t designed to deal with constant change. These can be increasingly expensive to upgrade, not the most user-friendly, and difficult to maintain.

05 NOT USING AUTOMATED MONITORING
Many compliance teams are losing out by not using analytics and data automation. Instead, they rely heavily on sample testing to determine if compliance controls and processes are working, so huge amounts of activity data are never actually checked.

Transform your compliance management process

Good news! There are some practical steps you can take to transform compliance processes and systems so that they become far more efficient and far less expensive and painful.

It’s all about optimizing the interactions of people, processes, and technology around regulatory compliance requirements across the entire organization.

It might not sound simple, but it’s what needs to be done. And, in our experience, it can be achieved without becoming massively time-consuming and expensive. Technology for regulatory compliance management has evolved to unite processes and roles across all aspects of compliance throughout your organization.

Look, for example, at how technology like Salesforce (a cloud-based system with big data analytics) has transformed sales, marketing, and customer service. Now, there’s similar technology which brings together different business units around regulatory compliance to improve processes and collaboration for the better.

Where to start?

Let’s look at what’s involved in establishing a technology-driven compliance management process. One that’s driven by data and fully integrated across your organization.

THE BEST PLACE TO START IS THE END

Step 1: Think about the desired end-state.

First, consider the objectives and the most important outcomes of your new process. How will it impact the different stakeholders? Take the time to clearly define the metrics you’ll use to measure your progress and success.

A few desired outcomes:

  • Accurately measure and manage the costs of regulatory and policy compliance.
  • Track how risks are trending over time, by regulation, and by region.
  • Understand, at any point in time, the effectiveness of compliance-related controls.
  • Standardize approaches and systems for managing compliance requirements and risks across the organization.
  • Efficiently integrate reporting on compliance activities with those of other risk management functions.
  • Create a quantified view of the risks faced due to regulatory compliance failures for executive management.
  • Increase confidence and response times around changing and new regulations.
  • Reduce duplication of efforts and maximize overall efficiency.

NOW, WHAT DO YOU NEED TO SUPPORT YOUR OBJECTIVES?

Step 2: Identify the activities and capabilities that will get you the desired outcomes.

Consider the different parts of the compliance management process below. Then identify the steps you’ll need to take or the changes you’ll need to make to your current activity that will help you achieve your objectives. We’ve put together a cheat sheet to help this along.

Galvanize

IDENTIFY & IMPLEMENT COMPLIANCE CONTROL PROCEDURES

  • 01 Maintain a central library of regulatory requirements and internal corporate policies, allocated to owners and managers.
  • 02 Define control processes and procedures that will ensure compliance with regulations and policies.
  • 03 Link control processes to the corresponding regulations and corporate policies.
  • 04 Assess the risk of control weaknesses and failure to comply with regulations and policies.

RUN TRANSACTIONAL MONITORING ANALYTICS

  • 05 Monitor the effectiveness of controls and compliance activities with data analytics.
  • 06 Get up-to-date confirmation of the effectiveness of controls and compliance from owners with automated questionnaires or certification of adherence statements.

MANAGE RESULTS & RESPOND

  • 07 Manage the entire process of exceptions generated from analytic monitoring and from the generation of questionnaires and certifications.

REPORT RESULTS & UPDATE ASSESSMENTS

  • 08 Use the results of monitoring and exception management to produce risk assessments and trends.
  • 09 Identify new and changing regulations as they occur and update repositories and control and compliance procedures.
  • 10 Report on the current status of compliance management activities from high- to low-detail levels.

IMPROVE THE PROCESS

  • 11 Identify duplicate processes and fix procedures to combine and improve controls and compliance tests.
  • 12 Integrate regulatory compliance risk management, monitoring, and reporting with overall risk management activities.

Eight compliance processes in desperate need of technology

01 Centralize regulations & compliance requirements
A major part of regulatory compliance management is staying on top of countless regulations and all their details. A solid content repository includes not only the regulations themselves, but also related data. By centralizing your regulations and compliance requirements, you’ll be able to start classifying them, so you can eventually search regulations and requirements by type, region of applicability, effective dates, and modification dates.
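To make the classification idea concrete, here is a minimal sketch of such a repository; the fields and filter parameters are illustrative assumptions rather than a reference to any particular product.

    from dataclasses import dataclass
    from datetime import date

    @dataclass
    class Regulation:
        reg_id: str
        title: str
        reg_type: str        # e.g. "AML", "privacy", "prudential"
        regions: set[str]    # regions of applicability
        effective: date      # effective date
        modified: date       # last modification date

    def search(repo: list[Regulation], *, reg_type=None, region=None,
               effective_after=None) -> list[Regulation]:
        """Filter the central library by type, region and effective date."""
        hits = repo
        if reg_type is not None:
            hits = [r for r in hits if r.reg_type == reg_type]
        if region is not None:
            hits = [r for r in hits if region in r.regions]
        if effective_after is not None:
            hits = [r for r in hits if r.effective >= effective_after]
        return hits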

02 Map to risks, policies, & controls
Classifying regulatory requirements is no good on its own. They need to be connected to risk management, control and compliance processes, and system functionality. This is the most critical part of a compliance management system.

Typically, in order to do this mapping, you need the following (a minimal data-model sketch follows the list):

  • An assessment of non-compliant risks for each requirement.
  • Defined processes for how each requirement is met.
  • Defined controls that make sure the compliance process is effective in reducing non-compliance risks.
  • Controls mapped to specific analytics monitoring tests that confirm the effectiveness on an ongoing basis.
  • Assigned owners for each mapped requirement. Specific processes and controls may be assigned to sub-owners.
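A minimal data-model sketch of that mapping might look like the following; the class and field names are hypothetical, and a real platform would add versioning, workflow, and audit trails on top.

    from dataclasses import dataclass, field

    @dataclass
    class Control:
        control_id: str
        description: str
        owner: str                # a control may be assigned to a sub-owner
        monitoring_tests: list[str] = field(default_factory=list)

    @dataclass
    class MappedRequirement:
        req_id: str          # links back to the regulation repository
        risk_rating: str     # assessed risk of non-compliance, e.g. "high"
        process: str         # defined process for how the requirement is met
        owner: str
        controls: list[Control] = field(default_factory=list)

    def unmapped(requirements: list[MappedRequirement]) -> list[str]:
        """Flag requirements with no controls, or with controls that have no
        monitoring tests -- exactly the gaps an assurance review should surface."""
        return [r.req_id for r in requirements
                if not r.controls
                or any(not c.monitoring_tests for c in r.controls)]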

03 Connect to data & use advanced analytics

Using different automated tests to access and analyze data is foundational to a data-driven compliance management approach.

The range of data sources and data types needed to perform compliance monitoring can be humongous. When it comes to areas like FCPA or other anti-bribery and corruption regulations, you might need to access entire populations of purchase and payment transactions, general ledger entries, payroll, and travel and entertainment expenses. And that’s just the internal sources. External sources could include things like the Politically Exposed Persons database or Sanctions Checks.

Extensive suites of tests and analyses can be run against the data to determine whether compliance controls are working effectively and if there are any indications of transactions or activities that fail to comply with regulations. The results of these analyses identify specific anomalies and control exceptions, as well as provide statistical data and trend reports that indicate changes in compliance risk levels.

Truly delivering on this step involves using the right technology since the requirements for accessing and analyzing data for compliance are demanding. Generalized analytic software is seldom able to provide more than basic capabilities, which are far removed from the functionality of specialized risk and control monitoring technologies.
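As a simple illustration of the kind of full-population test involved, the sketch below flags potential duplicate payments and screens vendors against an external watch list. The record fields are assumptions, and real monitoring suites run far richer analytics than this.

    from collections import Counter

    def duplicate_payment_test(payments):
        """Flag potential duplicates: same vendor, amount and date.
        `payments` is an iterable of dicts with vendor_id, amount, pay_date."""
        payments = list(payments)
        seen = Counter((p["vendor_id"], p["amount"], p["pay_date"])
                       for p in payments)
        return [p for p in payments
                if seen[(p["vendor_id"], p["amount"], p["pay_date"])] > 1]

    def sanctions_screen(payments, sanctioned_ids):
        """Screen the full payment population, not a sample, against a
        watch list such as a sanctions or PEP-derived ID set."""
        return [p for p in payments if p["vendor_id"] in sanctioned_ids]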

04 Monitor incidents & manage issues

It’s important to quickly and efficiently manage incidents once they’re flagged. But systems that create huge numbers of “false positives” or “false negatives” can end up wasting a lot of time and resources. On the other hand, a system that fails to detect high-risk activities creates the risk of major financial and reputational damage. The monitoring technology you choose should let you fine-tune analytics to flag actual risks and compliance failures and minimize false alarms.

The system should also allow for an issues resolution process that’s timely and maintains the integrity of responses. If the people responsible for resolving a flagged issue don’t do it adequately, an automated workflow should escalate the issues to the next level.
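A bare-bones version of such an escalation rule might look like this; the ladder, the five-day SLA and the issue fields are illustrative assumptions only.

    from datetime import datetime, timedelta

    ESCALATION_LADDER = ["analyst", "compliance_manager",
                         "chief_compliance_officer"]
    SLA = timedelta(days=5)  # assumed resolution window per level

    def escalate_stale_issues(issues, now=None):
        """Push unresolved issues one level up the ladder once the SLA lapses."""
        now = now or datetime.utcnow()
        for issue in issues:
            overdue = now - issue["assigned_at"] > SLA
            if issue["status"] == "open" and overdue:
                next_level = issue["level"] + 1
                if next_level < len(ESCALATION_LADDER):
                    issue["level"] = next_level
                    issue["assigned_to"] = ESCALATION_LADDER[next_level]
                    issue["assigned_at"] = now  # restart the clock at this level
        return issues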

Older software can’t meet the huge range of incident monitoring and issues management requirements. Or it can require a lot of effort and expense to modify the procedures when needed.

05 Manage investigations

As exceptions and incidents are identified, some turn into issues that need in-depth investigation. Software helps this investigation process by allowing the user to document and log activities. It should also support easy collaboration among everyone involved in the investigation process.

Effective security must be in place around access to all aspects of a compliance management system. But it’s extra important to have a high level of security and privacy for the investigation management process.

06 Use surveys, questionnaires & certifications

Going beyond just transactional analysis and monitoring, it’s also important to understand what’s actually happening right now, by collecting the input of those working on the front lines.

Software that has built-in automated surveys and questionnaires can gather large amounts of current information directly from these individuals in different compliance roles, then quickly interpret the responses.

For example, if you’re required to comply with the Sarbanes-Oxley Act (SOX), you can use automated questionnaires and certifications to collect individual sign-off on SOX control effectiveness questions. That information is consolidated and used to support the SOX certification process far more efficiently than using traditional ways of collecting sign-off.

07 Manage regulatory changes

Regulations change constantly, and to remain compliant, you need to know—quickly— when those changes happen. This is because changes can often mean modifications to your established procedures or controls, and that could impact your entire compliance management process.

A good compliance software system is built to withstand these revisions. It allows for easy updates to existing definitions of controls, processes, and monitoring activities.

Before software, any regulatory changes would involve huge amounts of manual activities, causing backlogs and delays. Now much (if not most) of the regulatory change process can be automated, freeing your time to manage your part of the overall compliance program.

08 Ensure regulatory examination & oversight

No one likes going through compliance reviews by regulatory bodies. It’s even worse if failures or weaknesses surface during the examination.

But if that happens to you, it’s good to know that many regulatory authorities have proven to be more accommodating and (dare we say) lenient when your compliance process is strategic, deliberate, and well designed.

There are huge benefits, in terms of efficiency and cost savings, to using a structured and well-managed regulatory compliance system. But the greatest economic benefit comes when you avoid a potentially major financial penalty by replacing an inherently unreliable and complicated legacy system with one that’s purpose-built and data-driven.

Click here to access Galvanize’s new White Paper

EIOPA Financial Stability Report July 2020

The unexpected COVID-19 virus outbreak led European countries to shut down major parts of their economies in an effort to contain it. Financial markets experienced huge losses and flight-to-quality investment behaviour. Governments and central banks committed to providing significant emergency packages to support the economy, as the economic shock caused by demand and supply disruptions, and its reflection in the financial markets, is expected to challenge economic growth, labour markets and consumer sentiment across Europe for an uncertain period of time.

Amid an unprecedented downward shift of interest rate curves during March, reflecting the flight-to-quality behaviour, credit spreads of corporates and sovereigns increased for riskier assets, effectively leading to a double-hit scenario. Equity markets dropped dramatically, showing extreme levels of volatility in response to the uncertainties around the virus’s effects and around the status and effectiveness of government and central bank support programs. Despite the stressed market environment, there were signs of improvement following the announcements of the support packages and during the gradual reopening of the economies. The virus outbreak also led to extraordinary working conditions, with part of the services sector working from home, raising the possibility that those conditions persist after the outbreak, which could decrease demand and market value for commercial real estate investments.

Within this challenging environment, insurers are exposed in terms of solvency risk, profitability risk and reinvestment risk. The sudden reassessment of risk premia and the increase in default risk could trigger large-scale rating downgrades and reduce the value of investments held by insurers and IORPs, especially exposures to highly indebted corporates and sovereigns. On the other hand, the risk of ultra-low interest rates persisting for long has further increased. Factoring in the knock-on effects of the weakening macro economy, insurers’ future own-funds positions could be further challenged by potentially lower volumes of profitable new business written, accompanied by increased volumes of profitable in-force policies being surrendered or lapsed.

Finally, liquidity risk has resurfaced, due to the potential for mass-lapse events and higher-than-expected virus- and litigation-related claims, accompanied by decreased premium inflows.

EIOPA1

For the European occupational pension sector, the negative impact of COVID-19 on the asset side is mainly driven by deteriorating equity market prices, as, in a number of Member States, IORPs allocate significant proportions of the asset portfolio (up to nearly 60%) to equity investments. However, the investment allocation is highly divergent amongst Member States, so that IORPs in other Member States hold up to 70% of their investments in bonds, mostly sovereign bonds, where the widening of credit spreads impairs their market value. The liability side is already pressured due to low interest rates and, where market-consistent valuation is applied, due to low discount rates. The funding and solvency ratios of IORPs are determined by national law and, as could be seen in the 2019 IORP stress test results, have been under pressure and are certainly negatively impacted by this crisis. The current situation may lead to benefit cuts for members and may require sponsoring undertakings to finance funding gaps, which may put additional pressure on the real economy and on entities sponsoring an IORP.

EIOPA2

Climate risks remain one of the focal points for the insurance and pension industry, with Environmental, Social and Governance (ESG) factors increasingly shaping investment decisions of insurers and pension funds but also affecting their underwriting. In response to climate related risks, the EU presented in mid-December the European Green Deal, a roadmap for making the EU climate neutral by 2050, providing actions meant to boost the efficient use of resources by

  • moving to a clean, circular economy and stopping climate change,
  • reverting biodiversity loss
  • and cutting pollution.

At the same time, natural catastrophe related losses were milder than in the previous year, but asymmetrically shifted towards poorer countries lacking relevant insurance coverage.

Cyber risks have become increasingly relevant across the financial system, particularly during the virus outbreak, due to the new working conditions imposed by the confinement measures. Amid the extraordinary en-masse remote working arrangements, an increased number of cyber-attacks has been reported on both individuals and healthcare systems. With increasing attention on cyber risks at both national and European level, EIOPA contributed to building a strong, reliable cyber insurance market by publishing its strategy for cyber underwriting, and has also been actively involved in promoting cyber resilience in the insurance and pensions sectors.

Click here to access EIOPA’s detailed Financial Stability Report July 2020

Stress Testing 2.0: Better Informed Decisions Through Expanded Scenario-Based Risk Management

Turning a Regulatory Requirement Into Competitive Advantage

Mandated enterprise stress testing – the primary macro-prudential tool that emerged from the 2008 financial crisis – helps regulators address concerns about the state of the banking industry and its impact on the local and global financial system. These regulatory stress tests typically focus on the largest banking institutions and involve a limited set of prescribed downturn scenarios.

Regulatory stress testing requires a significant investment by financial institutions – in technology, skilled people and time. And the stress testing process continues to become even more complex as programs mature and regulatory expectations keep growing.

The question is, what’s the best way to go about stress testing, and what other benefits can banks realize from this investment? Equally important, should you view stress testing primarily as a regulatory compliance tool? Or can banks harness it as a management tool that links corporate planning and risk appetite – and democratizes scenario-based analysis across the institution for faster, better business decisions?

These are important questions for every bank executive and risk officer to answer because justifying large financial investments in people and technology solely to comply with periodic regulatory requirements can be difficult. Not that noncompliance is ever an option; failure can result in severe damage to reputation and investor confidence.

But savvy financial institutions are looking for – and realizing – a significant return on investment by reaching beyond simple compliance. They are seeing more effective, consistent analytical processes and the ability to address complex questions from senior management (e.g., the sensitivity of financial performance to changes in macroeconomic factors). Their successes provide a road map for those who are starting to build – or are rethinking their approach to – their stress testing infrastructure.

This article reviews the maturation of regulatory stress test regimes and explores diverse use cases where stress testing (or, more broadly, scenario-based analysis) may provide value beyond regulatory stress testing.

Comprehensive Capital Assessments: A Daunting Exercise

The regulatory stress test framework that emerged following the 2008 financial crisis – that banks perform capital adequacy-oriented stress testing over a multiperiod forecast horizon – is summarized in Figure 1. At each period, a scenario exerts its impact on the net profit or loss based on the as-of-date business, including

  • portfolio balances,
  • exposures,
  • and operational income and costs.

The net profit or loss, after being adjusted by other financial obligations and management actions, will determine the capital that is available for the next period on the scenario path.

SAS1

Note that the natural evolution of the portfolio and business under a given scenario leads to a state of the business at the next horizon, which then starts a new evaluation of the available capital. The risk profile of this evolved business also determines the capital requirement under the same scenario. The capital adequacy assessment can be performed through this dynamic analysis of capital supply and demand.

This comprehensive capital assessment requires cooperation from various groups across business and finance in an institution. But it becomes a daunting exercise on a multiperiod scenario because of the forward-looking and path-dependent nature of the analysis. For this reason, some jurisdictions began the exercise with only one horizon. Over time, these requirements have been revised to cover at least two horizons, which allows banks to build more realistic business dynamics into their analysis.
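A stylized sketch of that path-dependent calculation follows. The inputs (per-period net income and required capital, both already conditioned on the scenario) and the simple dividend rule are assumptions for illustration, not a regulatory prescription.

    def project_capital(capital_0, scenario, dividend_ratio=0.3):
        """Walk the available-capital path along one scenario.
        `scenario` is a list of per-period dicts holding the projected
        net_income of the evolved book and the required_capital implied
        by its risk profile."""
        capital, path = capital_0, []
        for period in scenario:
            capital += period["net_income"]          # P&L under the scenario
            if period["net_income"] > 0:             # management action: payout
                capital -= dividend_ratio * period["net_income"]
            path.append({
                "available": capital,
                "required": period["required_capital"],
                "adequate": capital >= period["required_capital"],
            })
        return path

Because each period starts from the capital left by the previous one, a loss early on the scenario path propagates through every later horizon, which is what makes the multiperiod exercise so demanding.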

Maturing and Optimizing Regulatory Stress Testing

Stress testing – now a standard supervisory tool – has greatly improved banking sector resilience. In regions where stress testing capabilities are more mature, banks have built up adequate capital and have performed well in recent years. For example, both the US Federal Reserve and the Bank of England announced good results for their recent stress tests on large banks.

As these programs mature, many jurisdictions are raising their requirements, both quantitatively and qualitatively. For example:

  • US CCAR and Bank of England stress tests now require banks to carry out tests on institution-specific scenarios, in addition to prescribed regulatory scenarios.
  • Banks in regions adopting IFRS 9, including the EU, Canada and the UK, are now required to incorporate IFRS 9 estimates into regulatory stress tests. Likewise, banks subject to stress testing in the US will need to incorporate CECL estimates into their capital adequacy tests.
  • Liquidity risk has been incorporated into stress tests – especially as part of resolution and recovery planning – in regions like the US and UK.
  • Jurisdictions in Asia (such as Taiwan) have extended the forecast horizons for their regulatory stress tests.

In addition, stress testing and scenario analysis are now part of Pillar 2 in the Internal Capital Adequacy Assessment Process (ICAAP) published by the Basel Committee on Banking Supervision. Institutions are expected to use stress tests and scenario analyses to improve their understanding of the vulnerabilities that they face under a wide range of adverse conditions. Further uses of regulatory stress testing include the scenario-based analysis for Interest Rate Risk in the Banking Book (IRRBB).

Finally, the goal of regulatory stress testing is increasingly extending beyond completing a simple assessment. Management must prepare a viable mitigation plan should an adverse condition occur. Some regions also require companies to develop “living wills” to ensure the orderly wind-down of institutions and to prevent systemic contagion from an institutional failure.

All of these demands will require the adoption of new technologies and best practices.

Exploring Enhanced Use Cases for Stress Testing Capabilities

As noted by the Basel Committee on Banking Supervision in its 2018 publication Stress Testing Principles, “Stress testing is now a critical element of risk management for banks and a core tool for banking supervisors and macroprudential authorities.” As stress testing capabilities have matured, people are exploring how to use these capabilities for strategic business purposes – for example, to perform “internal stress testing.”

The term “internal stress testing” can seem ambiguous. Some stakeholders don’t understand the various use cases for applying scenario-based analyses beyond regulatory stress testing or doubt the strategic value to internal management and planning. Others think that developing a scenario-based analytics infrastructure that is useful across the enterprise is just too difficult or costly.

But there are, in fact, many high-impact strategic use cases for stress testing across the enterprise, including:

  1. Financial planning.
  2. Risk appetite management.
  3. What-if and sensitivity analysis.
  4. Emerging risk identification.
  5. Reverse stress testing.

Financial Planning

Stress testing is one form of scenario-based analysis. But scenario-based analysis is also useful for forward-looking financial planning exercises on several fronts:

  • The development of business plans and management actions is already required as part of regulatory stress testing, so it’s natural to align these processes with internal planning and strategic management.
  • Scenario-based analyses lay the foundation for assessing and communicating the impacts of changing environmental factors and portfolio shifts on the institution’s financial performance.
  • At a more advanced level, banks can incorporate scenario-based planning with optimization techniques to find an optimal portfolio strategy that performs robustly across a range of scenarios.

Here, banks can leverage the technologies and processes used for regulatory stress testing. However, both the infrastructure and program processes must be developed with flexibility in mind – so that both business-as-usual scenarios and alternatives can be easily managed, and the models and assumptions can be adjusted.

Risk Appetite Management

A closely related topic to stress testing and capital planning is risk appetite. Risk appetite defines the level of risk an institution is willing to take to achieve its financial objectives. According to Senior Supervisors Group (2008), a clearly articulated risk appetite helps financial institutions properly understand, monitor, and communicate risks internally and externally.

Figure 2 illustrates the dynamic relationship between stress testing, risk appetite and capital planning. Note that:

  • Risk appetite is defined by the institution to reflect its capital strategy, return targets and its tolerance for risk.
  • Capital planning is conducted in alignment with the stated risk appetite and risk policy.
  • Scenario-based analyses are then carried out to ensure the bank can operate within the risk appetite under a range of scenarios (i.e., planning, baseline and stressed).

SAS2

Any breach of the stated risk appetite observed in these analyses leads to management action (a minimal breach check is sketched after the list below). These actions may include, but are not limited to,

  • enforcement or reallocation of risk limits,
  • revisions to capital planning
  • or adjustments to current risk appetite levels.
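Building on the capital-path sketch from the stress-testing discussion above, a minimal breach check against a stated appetite floor might look like this; expressing the floor as a multiple of required capital is an illustrative assumption.

    def appetite_breaches(scenario_paths, appetite_floor=1.2):
        """Check each named scenario's capital path against the risk-appetite
        floor, e.g. available capital of at least 1.2x the requirement.
        `scenario_paths` maps names ("planning", "baseline", "stressed")
        to paths as produced by project_capital above."""
        breaches = []
        for name, path in scenario_paths.items():
            for t, point in enumerate(path):
                if point["available"] < appetite_floor * point["required"]:
                    breaches.append((name, t))  # triggers management action
        return breaches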

What-If and Sensitivity Analysis

Faster, richer what-if analysis is perhaps the most powerful – and demanding – way to extend a bank’s stress testing utility. What-if analyses are often initiated from ad hoc requests made by management seeking timely insight to guide decisions. Narratives for these scenarios may be driven by recent news topics or unfolding economic events.

An anecdotal example illustrates the business value of this type of analysis. Two years ago, a chief risk officer at one of the largest banks in the United States was at a dinner event and heard concerns about Chinese real estate and a potential market crash. He quickly asked his stress testing team to assess the impact on the bank if such an event occurred. His team was able to report back within a week. Fortunately, the result was not bad – news that was a relief to the CRO.

The responsiveness exhibited by this CRO’s stress testing team is impressive. But speed alone is not enough. To really get value from what-if analysis, banks must also conduct it with a reasonable level of detail and sophistication. For this reason, banks must design their stress test infrastructure to balance comprehensiveness and performance. Otherwise, its value will be limited.

Sensitivity analysis usually supplements stress testing. It differs from other scenario-based analyses in that the scenarios typically lack a narrative around them. Instead, they are usually defined parametrically to answer questions about scenario, assumption and model deviations.

Sensitivity analysis can answer questions such as:

  • Which economic factors are the most significant for future portfolio performance?
  • What level of uncertainty results from incremental changes to inputs and assumptions?
  • What portfolio concentrations are most sensitive to model inputs?

For modeling purposes, sensitivity tests can be viewed as an expanded set of scenario analyses. Thus, if banks perform sensitivity tests, they must be able to scale their infrastructure to complete a large number of tests within a reasonable time frame and must be able to easily compare the results.
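A minimal parametric sweep might look like the sketch below, where `model` stands in for whatever portfolio model the bank already runs; the factor names and shock sizes in the example are assumptions.

    def sensitivity_grid(model, base_inputs, shocks):
        """Re-run a portfolio model under parametric shocks to each factor.
        `model` maps an input dict to a metric (e.g., projected loss);
        `shocks` maps factor name -> list of +/- perturbations."""
        base = model(base_inputs)
        results = {}
        for factor, deltas in shocks.items():
            for delta in deltas:
                bumped = dict(base_inputs,
                              **{factor: base_inputs[factor] + delta})
                results[(factor, delta)] = model(bumped) - base
        return results  # largest magnitudes mark the most significant factors

    # Example: sensitivity_grid(model, inputs,
    #     {"unemployment": [-1.0, 1.0], "gdp_growth": [-0.5, 0.5]})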

Emerging Risk Identification

Econometric-based stress testing of portfolio-level credit, market, interest rate and liquidity risks is now a relatively established practice. But measuring the impacts from other risks, such as reputation and strategic risk, is not trivial. Scenario-based analysis provides a viable solution, though it requires proper translation from the scenarios involving these risks into a scenario that can be modeled. This process often opens a rich dialogue across the institution, leading to a beneficial consideration of potential business impacts.

Reverse Stress Testing

To enhance the relevance of the scenarios applied in stress testing analyses, many regulators have required banks to conduct reverse stress tests. For reverse stress tests, institutions must determine the risk factors that have a high impact on their business and determine scenarios that result in breaching the thresholds of specific output metrics (e.g., total capital ratio).

There are multiple approaches to reverse stress testing. Skoglund and Chen proposed a method leveraging risk information measures to decompose the risk factor impact from simulations and apply the results for stress testing. Chen and Skoglund also explained how stress testing and simulation can leverage each other for risk analyses.
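In its simplest brute-force form (as distinct from the simulation-based methods cited above), a reverse stress test can be sketched as a search over candidate scenarios for those that breach a threshold; ranking by total shock size to surface the mildest breaching scenarios is an illustrative choice.

    def reverse_stress_test(metric, candidate_scenarios, threshold):
        """Find scenarios under which `metric` (e.g., total capital ratio)
        falls below `threshold`. Each candidate is a dict of factor shocks."""
        breaching = [s for s in candidate_scenarios if metric(s) < threshold]
        # Mildest breaching scenarios first: usually the most plausible ones.
        return sorted(breaching, key=lambda s: sum(abs(v) for v in s.values()))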

Assessing the Impacts of COVID-19

The worldwide spread of COVID-19 in 2020 has presented a sudden shock to the financial plans of lending institutions. Both the spread of the virus and the global response to it are highly dynamic. Bank leaders, seeking a timely understanding of the potential financial impacts, have increasingly turned to scenario analysis. But, to be meaningful, the process must:

  • Scale to an increasing array of input scenarios as the situation continues to develop.
  • Provide a controlled process to perform and summarize numerous iterations of analysis.
  • Provide understandable and explainable results in a timely fashion.
  • Provide process transparency and control for qualitative and quantitative assumptions.
  • Maintain detailed data to support ad hoc reporting and concentration analysis.

Banks able to conduct rapid ad hoc analysis can respond more confidently and provide a data-driven basis for the actions they take as the crisis unfolds.

Conclusion

Regulatory stress testing has become a primary tool for bank supervision, and financial institutions have dedicated significant time and resources to comply with their regional mandates. However, the benefits of scenario-based analysis reach beyond such rote compliance.

Leading banks are finding they can expand the utility of their stress test programs to

  • enhance their understanding of portfolio dynamics,
  • improve their planning processes
  • and better prepare for future crises.

Through increased automation, institutions can

  • explore a greater range of scenarios,
  • reduce processing time and effort,
  • and support the increased flexibility required for strategic scenario-based analysis.

Armed with these capabilities, institutions can improve their financial performance and successfully weather downturns by making better, data-driven decisions.

Click here to access SAS’ latest Whitepaper

Implementing combined audit assurance

ASSESS IMPACT & CREATE AN ASSURANCE MAP

The audit impact assessment and assurance map are interdependent—and the best possible starting point for your combined assurance journey. An impact assessment begins with a critical look at the current or “as is” state of your organization. As you review your current state, you build out your assurance map with your findings. You can’t really do one without the other. The map, then, will reveal any overlaps and gaps, and provide insight into the resources, time, and costs you might require during your implementation. Looking at an assurance map example will give you a better idea of what we’re talking about. The Institute of Chartered Accountants of England and Wales (ICAEW) has an excellent template.

Galv4

The ICAEW has also provided a guide to building a sound assurance map. The institute suggests you take the following steps:

  1. Identify your sponsor (the main user/senior staff member who will act as a champion).
  2. Determine your scope (identify elements that need assurance, like operational/ business processes, board-level risks, governance, and compliance).
  3. Assess the required amount of assurance for each element (understand what the required or desired amount of assurance is across aspects of the organization).
  4. Identify and list your assurance providers in each line of defense (e.g., audit committee or risk committee in the third line).
  5. Identify your assurance activities (compile and review relevant documentation, select and interview area leads, collate and assess assurance provider information).
  6. Reassess your scope (revisit and update your map scope, based on the information you have gathered/evaluated to date).
  7. Assess the quality of your assurance activities (look at breadth and depth of scope, assurance provider competence, how often activities are reviewed, and the strengths/quality of assurance delivered by each line of defense).
  8. Assess the aggregate actual amount of assurance for each element (the total amount of assurance needs to be assessed, collating all the assurance being provided by each line of defense).
  9. Identify the gaps and overlaps in assurance for each element (compare the actual amount of assurance with the desired amount to determine if there are gaps or overlaps).
  10. Determine your course of action (make recommendations for the actions to be taken/activities to be performed moving forward).

Just based on the steps above, you can see how your desired state takes shape by the time you reach step 10. Ideally, by this point, gaps and overlaps have been eliminated. But the steps we just reviewed don’t cover the frequency of each review, and they don’t determine costs. So we’ve added a few more steps to round things out:

  1. Assess the frequency of each assurance activity.
  2. Identify total cost for all the assurance activities in the current state.
  3. Identify the total cost for combined assurance (i.e., when gaps and overlaps have been addressed, and any consequent benefits or cost savings).

DEFINE THE RISKS OF IMPLEMENTATION

Implementing combined assurance is a project, and like any project, there’s a chance it can go sideways and fail, losing you both time and money. So, just like anything else in business, you need to take a risk-based approach. As part of this stage, you’ll want to clearly define the risks of implementing a combined assurance program, and add these risks, along with a mitigation plan and the expected benefits, to your tool kit. As long as the projected benefits of the project outweigh the residual risks and costs, the implementation program is worth pursuing. You’ll need to be able to demonstrate that a little further down the process.

DEFINE RESOURCES & DELIVERABLES

Whoever owns the project of implementing combined assurance will no doubt need dedicated resources in order to execute. So, who do we bring in? On first thought, the internal audit team looks best suited to drive the program forward. But, during the implementation phase, you’ll actually want a cross-functional team of people from internal control, risk, and IT, to work alongside internal audit. So, when you’re considering resourcing, think about each and every team this project touches. Now that you know who’s going to do the work, you’ll want to define what they’re doing (key milestones) and when it will be delivered (time frame). And finally, define the actual benefits, as well as the tangible deliverables/outcomes of implementing combined assurance. (The table below provides some examples, but each organization will be unique.)

Galv1

RAISE AWARENESS & GET MANAGEMENT COMMITMENT

Congratulations! You’re now armed with a fancy color-coded impact assessment, and a full list of risks, resources, and deliverables. The next step is to clearly communicate and share the driving factors behind your combined assurance initiative. If you want them to support and champion your efforts, top management will need to be able to quickly take in and understand the rationale behind your desire for combined assurance.

Critical output: You’ll want to create a presentation kit of sorts, including the assurance map, lists of risks, resources, and deliverables, a cost/benefit analysis, and any supporting research or frameworks (e.g., the King IV Report, FRC Corporate Governance Code, available industry analysis, and case studies). Chances are, you’ll be presenting this concept more than once, so if you can gather and organize everything in a single spot, that will save a lot of headaches down the track.

ASSIGN ACCOUNTABILITY

When we ask the question, “Who owns the implementation of combined assurance?”, we need to consider two main things:

  • Who would be most impacted if combined assurance were implemented?
  • Who would be senior enough to work across teams to actually get the job done?

It’s evident that a board/C-level executive should lead the project. This project will span multiple departments and require buy-in from many people—so you need someone who can influence and convince. Therefore, we feel that the chief audit executive (CAE) and/or the chief risk officer (CRO) should be accountable for implementing combined assurance.

The CAE literally stands at the intersection of internal and external assurance. Where reliance is placed on the work of others, the CAE is still accountable and responsible for ensuring adequate support for conclusions and opinions reached by the internal audit activity. And the CRO is taking a more active interest in assurance maps as they become increasingly risk-focused.

The Institute of Internal Auditors (IIA), Standard 2050, also assigns accountability to the CAE, stating: “The chief audit executive should share information and coordinate activities with other internal and external assurance providers and consulting services to ensure proper coverage and minimize duplication of effort.” So, not only is the CAE at the intersection of assurance, they’re also directing traffic—exactly the combination we need to drive implementation.

Envisioning the solution

You’ve summarized the current/“as is” state in your assurance map. Now it’s time to move into a future state of mind and envision your desired state. What does your combined assurance solution look like? And, more critically, how will you create it? This stage involves more assessment work. Only now you’ll be digging into the maturity levels of your organization’s risk management and internal audit process, as well as the capabilities and maturity of your Three Lines of Defense. This is where you answer the questions, “What do I want?”, and “Is it even feasible?” Some make-or-break capability factors for implementing combined assurance include:

  1. Corporate risk culture: Risk culture and risk appetite shape an organization’s decision-making, and that culture is reflected at every level. Organizations that are more risk-averse tend to be unwilling to make quick decisions without evidence and data. On the other hand, risk-tolerant organizations take more risks, make rapid decisions, and pivot quickly, often without performing due diligence. How will your risk culture shape your combined assurance program?
  2. Risk management awareness: If employees don’t know—and don’t prioritize—how risk can and should be managed in your organization, your implementation program will fail. Assurance is very closely tied to risk, so it’s important to communicate constantly and make people aware that risk at every level must be adequately managed.
  3. Risk management processes: We just stated that risk and assurance are tightly coupled, so it makes sense that the more mature your risk management processes are, the easier it will be to implement combined assurance. Mature risk management means you’ve got processes defined, documented, running, and refined. For the lucky few who have all of these things, you’re going to have a much easier time compared to those who don’t.
  4. Risk & controls taxonomy: Without question, you will require a common risk and compliance language. We can’t have people making up names for tools, referring to processes in different ways, or worst of all, reporting on totally random KPIs. The result of combined assurance should be “one language, one voice, one view” of the risks and issues across the organization.
  5. System & process integrations: An integrated system where there is one set of risks and one set of controls is key to delivering effective combined assurance. This includes: risk registers across the organization, controls across the organization, issues and audit findings, and reporting.
  6. Technology use: Without dedicated software technology, it’s extremely difficult to provide a sustainable risk management system with sound processes, a single taxonomy, and integrated risks and controls. How technology is used in your organization will determine the sustainability of combined assurance. (If you already have a risk management and controls platform that has these integration capabilities, implementation will be easier.)
  7. Using assurance maps as monitoring tools: Assurance maps aren’t just for envisioning end-states; they’re also critical monitoring tools that can feed data into your dashboard. They can inform your combined assurance dashboard, to help report on progress.
  8. Continuous improvement mechanisms: A mature program will always have improvement mechanisms and feedback loops to incorporate user and stakeholder feedback. A lack of this feedback mechanism will impact the continued effectiveness of combined assurance.

We now assess the maturity of these factors (plus any others that you find relevant) and rank them on a scale of 1-4:

  • Level 1: Not achieved (0-15% of target).
  • Level 2: Partially achieved (15-50%).
  • Level 3: Largely achieved (50-85%).
  • Level 4: Achieved (85-100%).

This rating scale is based on ISO/IEC 15504, which assigns a rating to the degree to which each objective (process capability) is achieved. An example of a combined assurance capability maturity assessment can be seen in Figure 2.

Galv2
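As a small illustration of the banding, a helper that maps a factor’s achievement percentage to its level could look like this; the band edges follow the scale above, and the treatment of exact boundary values is an assumption.

    def maturity_level(pct_achieved: float) -> int:
        """Map an achievement percentage to its ISO/IEC 15504-style level."""
        if pct_achieved > 85:
            return 4  # Achieved
        if pct_achieved > 50:
            return 3  # Largely achieved
        if pct_achieved > 15:
            return 2  # Partially achieved
        return 1      # Not achieved (0-15% of target)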

GAP ANALYSIS

Once the desired levels for all of the factors are agreed on and endorsed by senior management, the next step is to undertake a gap analysis. The example in Figure 2 shows that the current overall maturity level is a 2 and the desired level is a 3 or 4 for each factor. The gap for each factor needs to be analyzed for the activities and resources required to bridge it. Then you can envision the solution and create a roadmap to bridge the gap(s).

SOLUTION VISION & ROADMAP

An example solution vision and roadmap could be:

  • We will use the same terminology and language for risk in all parts of the organization, and establish a single risk dictionary as a central repository.
  • All risks will be categorized according to severity and criticality and be mapped to assurance providers to ensure that no risk is assessed by more than one provider.
  • A rolling assurance plan will be prepared to ensure that risks are appropriately prioritized and reviewed at least once every two years.
  • An integrated, real-time report will be available on demand to show the status, frequency, and coverage of assurance activities.
  • The integrated report/assurance map will be shared with the board, audit committee, and risk committee regularly (e.g., quarterly or half-yearly).
  • To enable these capabilities, risk capture, storage, and reporting will be automated using an integrated software platform.

Figure 3 shows an example roadmap to achieve your desired maturity level.

Galv3

Click here to access Galvanize’s Risk Management White Paper

 

Fintech, regtech and the role of compliance in 2020

The ebb and flow of attitudes on the adoption and use of technology has evolving ramifications for financial services firms and their compliance functions, according to the findings of the Thomson Reuters Regulatory Intelligence’s fourth annual survey on fintech, regtech and the role of compliance. This year’s survey results represent the views and experiences of almost 400 compliance and risk practitioners worldwide.

During the lifetime of the report it has had nearly 2,000 responses and been downloaded nearly 10,000 times by firms, risk and compliance practitioners, regulators, consultancies, law firms and global systemically-important financial institutions (G-SIFIs). The report also highlights the shifting role of the regulator and concerns about best or better practice approaches to tackle the rise of cyber risk.

The findings have become a trusted source of insight for firms, regulators and their advisers alike. They are intended to help regulated firms with planning, resourcing and direction, and to allow them to benchmark whether their resources, skills, strategy and expectations are in line with those of the wider industry. As with previous reports, regional and G-SIFI results are split out where they highlight any particular trend.

One challenge for firms is the need to acquire the skill sets which are essential if they are to reap the expected benefits of technological solutions. Equally, regulators and policymakers need appropriate, up-to-date skill sets to enable consistent oversight of the use of technology in financial services. Firms themselves, and G-SIFIs in particular, have made substantial investments in skills and the upgrading of legacy systems.

Key findings

  • The involvement of risk and compliance functions in their firm’s approach to fintech, regtech and insurtech continues to evolve. Some 65% of firms reported their risk and compliance function was either fully engaged and consulted or had some involvement (59% in prior year). In the G-SIFI population 69% reported at least some involvement with those reporting their compliance function as being fully engaged and consulted almost doubling from 13% in 2018, to 25% in 2019. There is an even more positive picture presented on increasing board involvement in the firm’s approach to fintech, regtech and insurtech. A total of 62% of firms reported their board being fully engaged and consulted or having some involvement, up from 54% in the prior year. For G-SIFIs 85% reported their board being fully engaged and consulted or having some involvement, up from 56% in the prior year. In particular, 37% of G-SIFIs reported their board was fully engaged with and consulted on the firm’s approach to fintech, regtech and insurtech, up from 13% in the prior year.
  • Opinion on technological innovation and digital disruption has fluctuated in the past couple of years. Overall, the level of positivity about fintech innovation and digital disruption has increased, after a slight dip in 2018. In 2019, 83% of firms have a positive view of fintech innovation (23% extremely positive, 60% mostly positive), compared with 74% in 2018 and 83% in 2017. In the G-SIFI population the positivity rises to 92%. There are regional variations, with the UK and Europe reporting a 97% positive view at one end going down to a 75% positive view in the United States.
  • There has been a similar ebb and flow of opinion about regtech innovation and digital disruption although at lower levels. A total of 77% reported either an extremely or mostly positive view, up from 71% in the prior year. For G-SIFIs 81% had a positive view, up from 76% in the prior year.
  • G-SIFIs have reported a significant investment in specialist skills for both risk and compliance functions and at board level. Some 21% of G-SIFIs reported they had invested in and/or appointed people with specialist skills to the board to accommodate developments in fintech, insurtech and regtech, up from 2% in the prior year. This means in turn 79% of G-SIFIs have not completed their work in this area, which is potentially disturbing. Similarly, 25% of G-SIFIs have invested in specialist skills for the risk and compliance functions, up from 9% in the prior year. In the wider population 10% reported investing in specialist skills at board level and 16% reported investing in specialist skills for the risk and compliance function. A quarter (26%) reported they have yet to invest in specialist skills for the risk and compliance function, but they know it is needed (32% for board-level specialist skills). Again, these figures suggest 75% of G-SIFIs have not fully upgraded their risk and compliance functions, rising to 84% in the wider population.
  • The greatest financial technology challenges firms expect to face in the next 12 months have changed in nature since the previous survey, with the top three challenges cited as keeping up with technological advancements; budgetary limitations, lack of investment and cost; and data security. In prior years, the biggest challenges related to the need to upgrade legacy systems and processes as well as budgetary limitations, the adequacy and availability of skilled resources, together with the need for cyber resilience. In terms of the greatest benefits expected to be seen from financial technology in the next 12 months, the top three are a strengthening of operational efficiency, improved services for customers and greater business opportunities.
  • G-SIFIs are leading the way on the implementation of regtech solutions. Some 14% of G-SIFIs have implemented a regtech solution, up from 9% in the prior year with 75% (52% in the prior year) reporting they have either fully or partially implemented a regtech solution to help manage compliance. In the wider population, 17% reported implementing a regtech solution, up from 8% in the prior year. The 2018 numbers overall showed a profound dip from 2017 when 29% of G-SIFIs and 30% of firms reported implementing a regtech solution, perhaps highlighting that early adoption of regtech solutions was less than smooth.
  • Where firms have not yet deployed fintech or regtech solutions, various reasons were cited for what was holding them back. Significantly, one third of firms cited lack of investment; similar numbers pointed to a lack of in-house skills and to information security/data protection concerns. Some 14% of firms and 12% of G-SIFIs reported they had taken a deliberate strategic decision not to deploy fintech or regtech solutions yet.
  • There continues to be substantial variation in the overall budget available for regtech solutions. A total of 38% of firms (31% in the prior year) expected the budget to grow in the coming year; however, 31% said they lack a budget for regtech (25% in the prior year). For G-SIFIs, 48% expected the budget to grow (36% in the prior year), with 12% reporting no budget for regtech solutions (6% in the prior year).

Focus : Challenges for firms

Technological challenges for firms come in all shapes and sizes. There is the potentially marketplace-changing challenge posed by the rise of bigtech. There is also the evolving approach of regulators and the need to invest in specialist skill sets. Lastly, there is the emerging need to keep up with technological advances themselves.

TR10

The challenges for firms have moved on. In the first three years of the report, the biggest financial technology challenge facing firms was the need to upgrade legacy systems and processes. This year the top three challenges are expected to be the need to keep up with technological advancements; perceived budgetary limitations, lack of investment and cost; and data security.

Focus : Cyber risk

Cyber risk and the need to be cyber-resilient pose a major challenge for financial services firms, which are prime targets for hackers. They must be prepared for, and able to respond to, any kind of cyber incident. Good customer outcomes will be under threat if cyber resilience fails.

One of the most prevalent forms of cyber attack is ransomware. There are different types of ransomware, all of which seek to prevent a firm or an individual from using their IT systems and demand that something (usually payment of a ransom) be done before access is restored. Even then, there is no guarantee that paying the ransom or acceding to the attacker’s demands will restore full access to all IT systems, data or files. Many firms have found that critical files, often containing client data, have been encrypted as part of an attack, with large amounts of money demanded for their restoration. Encryption is in this instance used as a weapon, and it can be practically impossible to reverse-engineer the encryption or “crack” the files without the original encryption key, which cyber attackers deliberately withhold. What was previously often viewed as an IT problem has become a significant issue for risk and compliance functions. The regulatory stance is typified by the UK Financial Conduct Authority (FCA), which has said its goal is to “help firms become more resilient to cyber attacks, while ensuring that consumers are protected and market integrity is upheld”. Regulators do not expect firms to be impervious, but they do expect cyber risk management to become a core competency.
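
As a minimal illustration of why a withheld key makes recovery impractical, the short Python sketch below (an illustrative example, not drawn from the report; it assumes the widely used third-party cryptography package, and the data and variable names are hypothetical) shows that symmetric decryption is all-or-nothing: any key other than the original fails outright rather than yielding partial data.

```python
# Illustrative only: why encrypted files stay locked without the original key.
# Assumes the third-party "cryptography" package (pip install cryptography).
from cryptography.fernet import Fernet, InvalidToken

attacker_key = Fernet.generate_key()            # held only by the attacker
ciphertext = Fernet(attacker_key).encrypt(b"client records")

# The victim can only guess keys; any wrong key fails outright.
try:
    Fernet(Fernet.generate_key()).decrypt(ciphertext)
except InvalidToken:
    print("Decryption failed: without the original key the data stays locked.")

# Only the withheld key recovers the data, which is what the ransom "buys".
assert Fernet(attacker_key).decrypt(ciphertext) == b"client records"
```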

Good and better practice on defending against ransomware attacks

Risk and compliance officers do not need to become technological experts overnight, but they must ensure cyber risks are effectively managed and reported on within their firm’s corporate governance framework. For some compliance officers, cyber risk may be well outside their comfort zone, but there is evidence that simple steps, implemented rigorously, can go a long way towards protecting a firm and its customers. Basic cyber-security hygiene aimed at protecting businesses from ransomware attacks should make full use of the wide range of resources available on cyber resilience, IT security and protecting against malware attacks. The UK National Cyber Security Centre (NCSC) has produced practical guidance on how organizations can protect themselves in cyberspace, which it updates regularly. Indeed, the NCSC’s 10 steps to cyber security have now been adopted by most of the FTSE 350.

TR11

Closing thoughts

The financial services industry has much to gain from the effective implementation of fintech, regtech and insurtech, but the practical reality is that there are numerous challenges to overcome before the potential benefits can be realised. Investment continues to be needed in skill sets, systems upgrades and cyber resilience before firms can deliver technological innovation without endangering good customer outcomes.

An added complication is the business need to innovate while looking over one shoulder at the threat posed by bigtech. There are also concerns for solution providers. The last year has seen many technology start-ups going bust and far fewer new start-ups getting off the ground – an apparent parallel, at least on the surface, with the dotcom bubble. Solutions need to be practical, providers need to be careful not to over-promise and under-deliver, and above all developments should be aimed at genuine problems rather than being solutions looking for a problem. There are nevertheless potentially substantial benefits to be gained from implementing fintech, regtech and insurtech solutions. For risk and compliance functions, much of the benefit may come from the ability to automate rote processes with increasing accuracy and speed. Indeed, when 900 respondents to the 10th annual Cost of Compliance survey were asked to look into their crystal balls and predict the biggest change for compliance in the next 10 years, the largest response was automation.

Technology, and its failure or misuse, is increasingly linked to the personal liability and accountability of senior managers. Chief executives, board members and other senior individuals will be held accountable for failures in technology and should therefore ensure their skill sets are up to date. Regulators and politicians alike have shown themselves increasingly intolerant of senior managers who fail to take the expected reasonable steps to address any lack of resilience in their firm’s technology.

This year’s findings suggest firms may find it beneficial to consider:

  • Is fintech (and regtech) properly considered as part of the firm’s strategy? It is especially important that regtech is not overlooked in strategic terms: a systemic failure arising from a regtech solution has great capacity to cause problems for the firm – the UK FCA’s actions on regulatory reporting, among other things, are an indicator of this.
  • Not all firms seem to have fully tackled the governance challenge fintech implies: greater specialist skills may be needed at board level and in risk and compliance functions.
  • Lack of in-house skills was given as a main reason for failing to develop fintech or regtech solutions. It is heartening that firms understand the need for those skills. As fintech/regtech becomes mainstream, however, firms may be pressed into developing such solutions. Is there a plan in place to plug the skills gap?
  • Only 22% of firms reported that they need more resources to evaluate, understand and deploy fintech/regtech solutions. This suggests the remaining 78% of firms are relaxed about the resources needed in the second line of defence to ensure fintech/regtech solutions are properly monitored. Their confidence may be well founded, but it seems potentially bullish.

Click here to access Thomson Reuters’ Survey Results

Benchmarking digital risk factors facing financial service firms

Risk management is the foundation upon which financial institutions are built. Recognizing risk in all its forms, measuring it, managing it and mitigating it are all critical to success. But has every firm achieved that goal? It doesn’t take in-depth research beyond the myriad of breach headlines to answer that question.

But many important questions remain: What are the key dimensions of the financial sector’s Internet risk surface? How does that surface compare with other sectors? Which specific industries within Financial Services appear to be managing that risk better than others? We take up these questions and more in this report.

  1. The financial sector boasts the lowest rate of high and critical security exposures among all sectors. This indicates the sector is doing a good job of managing risk overall.
  2. But not all types of financial service firms appear to be managing risk equally well. For example, the rate of severe findings in the smallest commercial banks is 4x higher than that of the largest banks.
  3. It’s not just small community banks struggling, however. Securities and Commodities firms show a disconcerting combination of having the largest deployment of high-value assets AND the highest rate of critical security exposures.
  4. Others appear to be exceeding the norm. Take credit card issuers: they typically have the largest Internet footprint but balance that by maintaining the lowest rate of security exposures.
  5. Many other challenges and risk factors exist. For instance, the industry average rate of severe security findings in critical cloud-based assets is 3.5x that of assets hosted on-premises.

Dimensions of the Financial Sector Risk Surface

As Digital Transformation ushers in a plethora of changes, critical areas of risk exposure are also changing and expanding. We view the risk surface as anywhere an organization’s ability to operate, reputation, assets, legal obligations, or regulatory compliance is at risk. The aspects of a firm’s risk exposure that are associated with or observable from the internet are considered its internet risk surface. In Figure 1, we compare five key dimensions of the internet risk surface across different industries and highlight where the financial sector ranks among them.

  • Hosts: Number of internet-facing assets associated with an organization.
  • Providers: Number of external service providers used across hosts.
  • Geography: Measure of the geographic distribution of a firm’s hosts.
  • Asset Value: Rating of the data sensitivity and business criticality of hosts based on multiple observed indicators. High-value systems include those that collect information regulated under the GDPR and CCPA.
  • Findings: Security-relevant issues that expose hosts to various threats, rated on the CVSS severity scale (see the illustrative sketch after this list).
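
To make these definitions concrete, the sketch below models the five dimensions as a simple record and maps CVSS base scores onto the standard severity bands that the findings dimension relies on. This is a minimal Python sketch for illustration only; the class, its field names and the severe_rate helper are assumptions, not the report’s actual data model.

```python
# Illustrative model of the five risk-surface dimensions listed above.
# The severity thresholds follow the standard CVSS v3 bands; the data
# structure itself is an assumption, not the report's actual schema.
from dataclasses import dataclass, field

def cvss_severity(score: float) -> str:
    """Map a CVSS v3 base score (0.0-10.0) to its severity band."""
    if score == 0.0:
        return "none"
    if score < 4.0:
        return "low"
    if score < 7.0:
        return "medium"
    if score < 9.0:
        return "high"
    return "critical"

@dataclass
class RiskSurface:
    hosts: int                  # internet-facing assets
    providers: int              # external service providers used across hosts
    countries: int              # geographic spread of the hosts
    asset_value: str            # e.g. "low" / "medium" / "high"
    findings: list = field(default_factory=list)  # CVSS scores of observed issues

    def severe_rate(self) -> float:
        """Share of findings rated high or critical."""
        if not self.findings:
            return 0.0
        severe = [s for s in self.findings if cvss_severity(s) in ("high", "critical")]
        return len(severe) / len(self.findings)

# Example: a small firm with three findings, one of which is critical.
firm = RiskSurface(hosts=120, providers=8, countries=3,
                   asset_value="high", findings=[3.1, 6.4, 9.8])
print(firm.severe_rate())  # 0.333..., since one of three findings is severe
```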

TR1

The values recorded in Figure 1 for these dimensions represent what’s “typical” (as measured by the mean or median) among organizations within each sector. There’s a huge amount of variation – not every financial institution operates more external hosts than every realtor – but what you see here is the general pattern. The blue highlights trace the ranking of Finance along each dimension.

Financial firms are undoubtedly aware of these tendencies and the need to protect those valuable assets. What’s more, that awareness appears to translate fairly effectively into action. Finance boasts the lowest rate of high and critical security exposures among all sectors. We also ran the numbers specific to high-value assets, and financial institutions show the lowest exposure rates there too. All of this aligns pretty well with expectations—financial firms keep a tight rein on their valuable Internet-exposed assets.

This control tendency becomes even more apparent when examining the distribution of hosts with severe findings in Figure 2. Blue dots mark the average exposure rate for the entire sector (and correspond to values in Figure 1), while the grey bars indicate the amount of variation among individual organizations within each sector. The fact that Finance exhibits the least variation shows that even rotten apples don’t fall as far from the Finance tree as they often do in other sectors. Perhaps a rising tide lifts all boats?
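
Readers wanting to reproduce this kind of “typical value plus spread” summary on their own data could start from something like the minimal pandas sketch below; it is illustrative only, and the column names and sample figures are hypothetical.

```python
# Minimal sketch: a "typical" per-sector value (median) and a simple spread
# measure (interquartile range), mirroring the dot-plus-bar reading of Figure 2.
# Column names and sample data are hypothetical.
import pandas as pd

orgs = pd.DataFrame({
    "sector":      ["finance", "finance", "finance", "retail", "retail", "retail"],
    "severe_rate": [0.02, 0.03, 0.04, 0.01, 0.08, 0.20],
})

def q1(s): return s.quantile(0.25)
def q3(s): return s.quantile(0.75)

summary = orgs.groupby("sector")["severe_rate"].agg(typical="median", q1=q1, q3=q3)
summary["spread"] = summary["q3"] - summary["q1"]
print(summary)  # finance: lower typical rate and far less spread than retail
```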

TR2

Security Exposures in Financial Cloud Deployments

We now know financial institutions do well at minimizing security findings, but does that record stand equally strong across all infrastructure? Figure 3 answers that question by featuring four of the five key risk surface dimensions:

  • the proportion of hosts (square size),
  • asset value (columns),
  • hosting location (rows),
  • and the rate of severe security findings (color scale and value label).

This view facilitates a range of comparisons, including the relative proportion of assets hosted internally vs. in the cloud, how asset value distributes across hosting locales, and where high-severity issues accumulate.
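
A Figure 3-style view can be approximated as a pivot of severe-finding rates by hosting location and asset value, as in the minimal pandas sketch below. It is illustrative only: the column names and sample rows are hypothetical, and the final ratio mirrors the cloud vs. on-prem comparison discussed in the text.

```python
# Minimal sketch of a Figure 3-style breakdown: share of hosts with a
# high/critical finding, by hosting location (rows) and asset value (columns).
# Column names and the sample rows are hypothetical.
import pandas as pd

hosts = pd.DataFrame({
    "location":    ["on-prem", "on-prem", "on-prem", "cloud", "cloud", "cloud"],
    "asset_value": ["medium",  "medium",  "high",    "low",   "high",  "high"],
    "severe":      [0,         1,         0,         0,       1,       1],
})

view = hosts.pivot_table(index="location", columns="asset_value",
                         values="severe", aggfunc="mean")
print(view)

# The same frame yields the cloud vs. on-prem disparity quoted in the text.
rates = hosts.groupby("location")["severe"].mean()
print(rates["cloud"] / rates["on-prem"])  # 2.0 here; the report finds 3.5x
```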

TR3

From Figure 3, the box sizes indicate that organizations in the financial sector host a majority of their Internet-facing systems on-premises but leverage the cloud to a greater degree for low-value assets. The bright red box makes it apparent that security exposures concentrate most acutely in high-value assets hosted in the cloud. Overall, the rate of severe findings in cloud-based assets is 3.5x that of on-prem assets. This suggests the angst many financial firms have over moving to the cloud does have some merit. But when we examine the Finance sector relative to others in Figure 4, the intensity of exposures in critical cloud assets appears much less drastic.

In Figure 3, we can see that the largest number of hosts are on-prem and of medium value. But high-value assets in the cloud exhibit the highest rate of findings.

Given that cloud vs. on-prem exposure disparity, we caution against jumping to conclusions. One interpretation of these results would proclaim that the cloud isn’t ready for financial applications and should be avoided. Another would suggest that it’s more about organizational readiness for the cloud than about any inherent insecurity of the cloud. Either way, it appears that some financial institutions migrating to the cloud are handling that paradigm shift better than others.

It must also be noted that not all cloud environments are the same. Our Cloud Risk Surface report discovered an average 12x difference between the cloud providers with the highest and lowest exposure rates. We still believe this says more about the typical users and use cases of the various cloud platforms than about any intrinsic security inequalities. At the same time, we recommend evaluating cloud providers on their internal features as well as on the tools and guidance they make available to help customers secure their environments. Certain clouds are undoubtedly a better match for financial services use cases, while others are less so.

TR4

Risk Surface of Subsectors within Financial Services

Having compared Finance to other sectors at a high level, we now examine the risk surface of major subsectors of financial services according to the following NAICS designations:

  • Insurance Carriers: Institutions engaged in underwriting and selling annuities, insurance policies, and benefits.
  • Credit Intermediation: Includes banks, savings institutions, credit card issuers, loan brokers, and processors, etc.
  • Securities & Commodities: Investment banks, brokerages, securities exchanges, portfolio management, etc.
  • Central Banks: Monetary authorities that issue currency, manage national money supply and reserves, etc.
  • Funds & Trusts: Funds and programs that pool securities or other assets on behalf of shareholders or beneficiaries.

TR5

Figure 5 compares these Finance subsectors along the same dimensions used in Figure 1. At the top, we see that Insurance Carriers generally maintain a large Internet surface area (hosts, providers, countries), but a comparatively lower ranking for asset value and security findings. The Credit Intermediation subsector (the NAICS designation that includes banks, brokers, creditors, and processors) follows a similar pattern. This indicates that such organizations are, by and large, able to maintain some level of control over their expanding risk surface.

A leading percentage of high-value assets combined with a leading percentage of critical security findings makes the Securities and Commodities subsector a disconcerting case. It suggests either unusually high risk tolerance or ineffective risk management (or both), leaving those valuable assets overexposed. The Funds and Trusts subsector exhibits a more risk-averse approach, minimizing exposures across its relatively small digital footprint of valuable assets.

Risk Surface across Banking Institutions

Given that the financial sector is so broad, we thought a closer examination of the risk surface particular to banking institutions was in order. Banks have long concerned themselves with risk. Well before the rise of the Internet or mobile technologies, banks made their profits by gauging the risk of potential borrowers and loans, plotting the risk and reward of offering various deposit and investment products, entering different markets, and allowing access through multiple delivery channels. It could be said that the successful measurement and management of risk throughout an organization has always been perhaps the key factor determining the relative success or failure of any bank.

As a highly regulated industry in most countries, banking institutions must also consider risk from more than a business or operational perspective. They must take into account the compliance requirements to limit risk in various areas and ensure that they are securing their systems and services in a way that meets regulatory standards. Such pressures undoubtedly affect the risk surface, and Figure 6 hints at those effects on different types of banking institutions.

Credit card issuers earn the honored distinction of having the largest average number of Internet-facing hosts (by far) while achieving the lowest prevalence of severe security findings. Credit unions flip this trend with the fewest hosts and most prevalent findings. This likely reflects the perennial struggle of credit unions to get the most bang from their buck.

Traditionally well-resourced commercial banks leverage the most third party providers and have a presence in more countries, all with a better-than-average exposure rate. Our previous research revealed that commercial banks were among the top two generators and receivers of multi-party cyber incidents, possibly due to the size and spread of their risk surface.

TR6

Two Things to Consider

  1. In this interconnected world, third-party and fourth-party risk is your risk. If you are a financial institution, particularly a commercial bank, take a moment to congratulate yourself on managing risk well – but only for a moment. Why? Because every enterprise is critically dependent on a wide array of vendors and partners that span a broad spectrum of industries. Their risk is your risk. The work of your third-party risk team is critically important in holding your vendors accountable to managing your risk interests well.
  2. Managing risk – whether internal or third-party – requires focus. There are simply too many things to do, giving rise to the endless “hamster wheel of risk management”. A better approach starts with obtaining an accurate picture of your risk surface and the critical exposures across it. This includes third-party relationships and, increasingly, fourth-party risk, which bank regulators now expect firms to manage. Do you have the resources to manage all of this sufficiently? Do you know your risk surface?

Click here to access Riskrecon Cyentia’s Study