The evolution of GRC

Attitudes to governance, risk and compliance (GRC) activities are changing among Tier 1 financial institutions. The need to keep up with rapid regulatory change and the pressure of larger, more publicised penalties handed out by regulators in recent years have prompted an evolution in how risk is viewed and managed. Financial firms also face an increasingly volatile market environment that requires them to remain nimble – not just to survive, but to thrive.

As a result of these market developments, GRC is now seen not as one strand of the business but as a far more integrated activity, with many companies realigning resources around the ‘three lines of defence’ model. GRC is increasingly being treated as an enterprise-wide responsibility by organisations that are successfully navigating these challenging times for global financial markets. This shift in attitudes is also leading to a rethink of the tools used by all three lines of defence to participate in GRC activities. Some are exploring more innovative solutions to support and engage infrequent users – particularly those in the first line of defence (1LoD). The more intuitive design of such tools enables these users to take a more active role in risk-aware decision-making.

These and other innovations promise to bring greater effectiveness and efficiency to an area into which firms have channelled increasing levels of resource in recent years, yet in which they are struggling to keep up with demand. A recent survey carried out by Risk.net and IBM found that risk and compliance professionals acknowledge the limitations of existing operational risk and regulatory compliance tools and systems to satisfy current and future GRC requirements. The survey polled 106 senior risk, compliance, audit and legal executives at financial firms including banks (53%), insurance companies (21%) and asset management firms (12%). The results revealed that nearly one-third of these respondents remain unimpressed with the effectiveness of their organisation’s ability to cope with the complexity and pace of regulatory change. Nearly half gave a similar response regarding their organisation’s efficiency in this area.

With these issues in mind, many of the firms surveyed have started to explore user-experience needs more deeply and combine the results with artificial intelligence (AI) capabilities to further develop GRC systems and processes. These capabilities are designed to enhance compliance systems and processes and make them more intuitive for all. As such, user-experience research and design has become a key consideration for organisations wanting to ensure employees across all three lines of defence can participate more fully in GRC activities. In addition, AI-powered tools can help 1LoD business users better manage risk and ensure compliance by increasing the efficiency and effectiveness of these GRC systems and processes. The survey shows that, while some organisations are already developing these types of solutions, there is still room for greater understanding of the benefits of new and innovative forms of technology throughout the global financial markets. For instance, nearly half of respondents to the survey, when asked about the benefits of AI for GRC activities, were unsure of the potential time efficiencies such tools can bring. More than one-quarter were undecided on whether AI would free up employees’ time to focus on more strategic tasks.

Many organisations are still considering how to move forward in this area, but it will be those that truly embrace user-focused tools and leverage innovative technologies such as AI and advanced analytics to increase efficiencies that can expect to reap the rewards of successfully managing regulatory change and tackling market volatility.

[Figure: the three lines of defence (LoD) model]

Current and Future Applications

The survey highlights that financial firms already recognise that these solutions can be used to more efficiently manage the regulatory change process. For example, AI-based solutions can provide smart alerts to highlight the most relevant regulatory changes – 35% of survey respondents see AI as offering the biggest potential improvements in this area.

Improving the speed and accuracy of classification and reporting of information – for example, in relation to loss events – was another area identified for its high AI potential. Nearly one-third of respondents (31%) see possibilities for improvement of current GRC processes in this area. Some financial firms have already started to reap the rewards of this type of approach. Larger firms are typically ahead of the game with such developments, often having more resources to put into research and development. Of the 13% of larger firms that have seen a decrease in GRC resources over the past year, one-third attribute that decrease to “tools and automation improvements”.

Similarly, 44% of those polled work at organisations already making changes to improve end-to-end time and user experience in relation to GRC processes and tools. A further 19% plan to do this in the next 12 months and, in line with this, 64% of survey respondents expect their firm’s GRC resources to increase over the next 24 months (see figure 8). While it is not clear from the survey whether these additional resources will be specifically directed towards AI, more than 80% of respondents work at organisations currently considering AI for a range of GRC activities.

The most popular use of AI among financial firms is to improve the speed and/or accuracy of classifying and reporting information, such as loss events – 19% of respondents say their organisation is currently using AI for this purpose, and 81% are currently considering this type of use. Such events happen fairly infrequently, so training employees to classify and enter such information can be time-consuming, but incorrect classification can have a real impact on data quality. By using natural language processing (NLP) tools to understand and categorise loss events automatically, organisations can streamline the time and resources required to train employees to collect and manage this information.
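
To make this concrete, the short sketch below shows how a newly reported loss-event description might be routed to a suggested category with a simple NLP classifier. It is a minimal illustration under assumed inputs: the scikit-learn pipeline, the category labels and the training texts are not taken from the survey or from any specific vendor tool.

```python
# Minimal sketch of NLP-based loss-event classification (illustrative assumptions only:
# the categories, training texts and model choice are not taken from the survey).
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Tiny, made-up set of labelled loss-event descriptions.
events = [
    "wire transfer sent to wrong beneficiary due to manual keying error",
    "payment processed twice because of a batch job failure",
    "employee expense claims falsified over several months",
    "unauthorised trading positions concealed from supervisors",
    "customer data exposed after phishing attack on a staff mailbox",
    "ransomware outage disrupted online banking for six hours",
]
labels = [
    "execution_delivery_process", "execution_delivery_process",
    "internal_fraud", "internal_fraud",
    "external_event", "external_event",
]

# TF-IDF text features feeding a simple linear classifier.
model = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)),
                      LogisticRegression(max_iter=1000))
model.fit(events, labels)

# Suggest a category for a newly reported event; a 1LoD user would confirm or correct it.
new_event = "duplicate settlement instruction released after a reconciliation break"
print(model.predict([new_event])[0])
```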

According to the survey, 83% of respondents are also currently considering the use of AI tools to develop smart alerts that will highlight any new rules or updates to existing regulations, helping financial firms manage regulatory change more efficiently. Many organisations already receive an overwhelming number of alerts every day relating to new rules or changes, but some or all of these changes may not actually apply to their businesses. AI can be used to tailor these alerts so that compliance teams only receive the most relevant ones. Using NLP to create this mechanism can be the difference between sorting through 100 alerts in one day and receiving one smart alert that has been identified by an AI-powered solution.

Control mapping is another area to which AI can add value. When putting controls in place relating to specific obligations within a regulation, for example, compliance teams can either create a new control or, using NLP, detect whether there is already an applicable control in place that can be mapped to record the organisation’s compliance with the rule. This reduces the amount of time spent by the team reading and understanding new legislation or rule changes to determine applicability, as well as improving accuracy and reducing duplicate controls.
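
As a rough illustration of how this kind of NLP-based matching might work (the same idea underpins the smart-alert filtering described above), the sketch below scores a new regulatory obligation against an existing control inventory using text similarity. The control texts, identifiers and the 0.30 threshold are assumptions for the example, not part of any product referenced here.

```python
# Minimal sketch of NLP-assisted control mapping: score a new obligation against an
# existing control inventory and either map it or propose a new control.
# Control texts, identifiers and the similarity threshold are illustrative assumptions.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

controls = {
    "CTRL-014": "All customer complaints are logged, triaged and resolved within 15 business days.",
    "CTRL-027": "Transaction monitoring alerts are reviewed daily by the AML operations team.",
    "CTRL-033": "Access to production systems is recertified quarterly by application owners.",
}

new_obligation = ("Firms must review alerts generated by transaction monitoring systems "
                  "on a timely basis and document the outcome of each review.")

vectorizer = TfidfVectorizer(stop_words="english")
control_matrix = vectorizer.fit_transform(controls.values())   # one row per control
obligation_vector = vectorizer.transform([new_obligation])     # same vocabulary

scores = cosine_similarity(obligation_vector, control_matrix).ravel()
best_idx = int(scores.argmax())
best_id, best_score = list(controls)[best_idx], scores[best_idx]

THRESHOLD = 0.30  # assumed cut-off; would be tuned on real control and obligation data
if best_score >= THRESHOLD:
    print(f"Map obligation to existing control {best_id} (similarity {best_score:.2f})")
else:
    print("No sufficiently similar control found - propose a new control")
```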

Click here to access IBM’s White Paper

EIOPA’s Supervisory Statement on Solvency II: Application of the proportionality principle in the supervision of the Solvency Capital Requirement

EIOPA identified potential divergences in supervisory practices concerning the supervision of the Solvency Capital Requirement (SCR) calculation for immaterial sub-modules.

EIOPA agrees that, in the case of immaterial SCR sub-modules, the principle of proportionality applies to the supervisory review process, but considers it important to guarantee supervisory convergence, as divergent approaches could lead to supervisory arbitrage.

EIOPA is of the view that the consistent implementation of the proportionality principle is a key element to ensure supervisory convergence for the supervision of the SCR. For this purpose the following key areas should be considered:

Proportionate approach

Supervisory authorities may allow undertakings, when calculating the SCR at the individual undertaking level, to adopt a proportionate approach towards immaterial SCR sub-modules, provided that the undertaking concerned is able to demonstrate to the satisfaction of the supervisory authorities that:

  1. the amount of the SCR sub-module is immaterial when compared with the total basic SCR (BSCR);
  2. applying a proportionate approach is justifiable taking into account the nature and complexity of the risk;
  3. the pattern of the SCR sub-module is stable over the last three years;
  4. such amount/pattern is consistent with the business model and the business strategy for the following years; and
  5. undertakings have in place a risk management system and processes to monitor any evolution of the risk, either triggered by internal sources or by an external source that could affect the materiality of a certain sub-module.

This approach should not be used when calculating SCR at group level.

An SCR sub-module should be considered immaterial for the purposes of the SCR calculation when its amount is not relevant to the decision-making process or the judgement of the undertaking itself or of the supervisory authorities. Following this principle, even if materiality needs to be assessed on a case-by-case basis, EIOPA recommends that materiality is assessed considering the weight of the sub-modules in the total BSCR and

  • that each sub-module subject to this approach should not represent more than 5% of the BSCR
  • or that all such sub-modules together should not represent more than 10% of the BSCR (see the sketch below).
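
As a simple illustration of how these indicative thresholds could be checked, the sketch below flags candidate sub-modules against the 5% and 10% weights. The joint reading of the two limits and the figures used are assumptions for the example, not prescriptions from EIOPA’s statement.

```python
# Minimal sketch of the indicative 5% / 10% materiality thresholds for SCR sub-modules.
# The joint interpretation (each candidate <= 5% of the BSCR and all candidates together
# <= 10% of the BSCR) and the figures below are assumptions for illustration only.

def immaterial_submodules(submodule_scr: dict, bscr: float,
                          single_limit: float = 0.05, aggregate_limit: float = 0.10) -> dict:
    """Return the sub-modules that could qualify for the proportionate approach."""
    # Keep sub-modules whose individual weight in the total BSCR is at most 5%.
    candidates = {name: amount for name, amount in submodule_scr.items()
                  if amount / bscr <= single_limit}
    # The retained candidates, taken together, must also stay within 10% of the BSCR;
    # drop the largest candidate until the aggregate limit is respected.
    while candidates and sum(candidates.values()) / bscr > aggregate_limit:
        del candidates[max(candidates, key=candidates.get)]
    return candidates

# Illustrative figures only (EUR millions).
bscr = 400.0
submodules = {"lapse risk": 12.0, "catastrophe risk": 30.0,
              "concentration risk": 9.0, "currency risk": 45.0}
print(immaterial_submodules(submodules, bscr))
# -> lapse risk (3.0%) and concentration risk (2.25%); the other two exceed the 5% weight.
```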

For immaterial SCR sub-modules, supervisory authorities may allow undertakings not to perform a full recalculation of such a sub-module on a yearly basis, taking into consideration the complexity and burden that such a calculation would represent when compared with the result of the calculation.

Prudent calculation

For the sub-modules identified as immaterial, a calculation of the SCR sub-module using prudently estimated inputs and leading to prudent outcomes should be performed at the time of the decision to adopt a proportionate approach. Such a calculation should be subject to the consent of the supervisory authority.

The result of such a calculation may then be used in principle for the next three years, after which a full calculation using prudently estimated inputs is required so that the immateriality of the sub-module and the risk-based and proportionate approach are re-assessed.

During the three-year period, the key function holder of the actuarial function should express an opinion to the administrative, management or supervisory body of the undertaking on the outcome of the immaterial sub-module used for calculating the SCR.

Risk management system and ORSA

The risk management system should be proportionate to the risks at stake while ensuring proper monitoring of any evolution of the risk, whether triggered by internal sources, such as a change in the business model or business strategy, or by an external source, such as an exceptional event that could affect the materiality of a certain sub-module.

Such monitoring should include the setting of qualitative and quantitative early warning indicators (EWIs), to be defined by the undertaking and embedded in the own risk and solvency assessment (ORSA) processes.

Supervisory reporting and public disclosure

Undertakings should include information on the risk management system in the ORSA Report. Undertakings should include structured information on the sub-modules for which a proportionate approach is applied in the Regular Supervisory Reporting and in the Solvency and Financial Condition Report (SFCR), under the section “E.2 Capital Management – Solvency Capital Requirement and Minimum Capital Requirement”.

Supervisory review process

The approach should be implemented in the context of ongoing supervisory dialogue, meaning that the supervisory authority should be satisfied with and agree to the approach taken, and be kept informed of any material change. Supervisory authorities should inform undertakings if they have any concern with the approach; where such concerns exist, the approach should not be implemented, or might be implemented only with additional safeguards agreed between the supervisory authority and the undertaking.

In some situations supervisory authorities may require a full calculation following the requirements of the Delegated Regulation and using inputs prudently estimated.

Example: Supervisory reporting and public disclosure

Undertakings should include information on the risk management system referred to in the previous paragraphs in the ORSA Report.

Undertakings should include structured information on the sub-modules for which a proportionate approach is applied in the Regular Supervisory Reporting, under the section “E.2 Capital Management – Solvency Capital Requirement and Minimum Capital Requirement” (RSR), including at least the following information:

  1. identification of the sub-module(s) for which a proportionate approach was applied;
  2. amount of the SCR for such a sub-module in the last three years before the application of the proportionate approach, including the current year;
  3. the date of the last calculation performed following the requirements of the Delegated Regulation using inputs prudently estimated; and
  4. early warning indicators identified and triggers for a calculation following the requirements of the Delegated Regulation and using inputs prudently estimated.

Undertakings should also include structured information on the sub-modules for which a proportionate approach is applied in the Solvency and Financial Condition Report, under the section “E.2 Capital Management – Solvency Capital Requirement and Minimum Capital Requirement” (SFCR), including at least the identification of the sub-module(s) for which a proportionate calculation was applied.

An example of structured information to be included in the regular supervisory report in line with Article 311(6) of the Delegated Regulation is as follows:

[Table: example of structured information on sub-modules subject to the proportionate approach (EIOPA)]

This proportionate approach should also be reflected in the quantitative reporting templates to be submitted. In this case the templates would reflect the amounts used for the last full calculation performed.

Click here to access EIOPA’s Supervisory Statement

Systemic Risk and Macroprudential Policy in Insurance (EIOPA)

In its work, EIOPA followed a step-by-step approach seeking to address the following questions in a sequential way:

  1. Does insurance create or amplify systemic risk?
  2. If yes, what are the tools already existing in the Solvency II framework, and how do they contribute to mitigating the sources of systemic risk?
  3. Are other tools needed and, if yes, which ones could be promoted?

Each paper published addresses one of the questions above. The publication of the three EIOPA papers on systemic risk and macroprudential policy in insurance has constituted an important milestone by which EIOPA has defined its policy stance and laid down its initial ideas on several relevant topics.

This work should now be turned into a specific policy proposal for additional macroprudential tools or measures where relevant and possible as part of the review of Directive 2009/138/EC (the ‘Solvency II Review’). For this purpose, and in order to gather the views of stakeholders, EIOPA is publishing this Discussion Paper on systemic risk and macroprudential policy in insurance, which focuses primarily on the third paper, i.e. on potential new tools and measures. Special attention is devoted to the four tools and measures specifically highlighted in the recent European Commission’s Call for Advice to EIOPA.

The financial crisis has shown the need to further consider the way in which systemic risk is created and/or amplified, as well as the need to have proper policies in place to address those risks. So far, most of the discussions on macroprudential policy have focused on the banking sector due to its prominent role in the recent financial crisis.

Given the relevance of the topic, EIOPA initiated the publication of a series of three papers on systemic risk and macroprudential policy in insurance with the aim of contributing to the debate and ensuring that any extension of this debate to the insurance sector reflects the specific nature of the insurance business.

EIOPA followed a step-by-step approach, seeking to address the following questions:

  • Does insurance create or amplify systemic risk? In the first paper, entitled ‘Systemic risk and macroprudential policy in insurance’, EIOPA identified and analysed the sources of systemic risk in insurance and proposed a specific macroprudential framework for the sector.
  • If yes, what are the tools already existing in the current framework, and how do they contribute to mitigating the sources of systemic risk? In the second paper, ‘Solvency II tools with macroprudential impact’, EIOPA identified, classified and provided a preliminary assessment of the tools or measures already existing within the Solvency II framework which could mitigate any of the systemic risk sources that were previously identified.
  • Are other tools needed and, if yes, which ones could be promoted? The third paper carried out an initial assessment of other potential tools or measures to be included in a macroprudential framework designed for insurers. EIOPA focused on four categories of tools (capital and reserving-based tools, liquidity-based tools, exposure-based tools and pre-emptive planning). The paper focuses on whether a specific instrument should or should not be further considered. This is an important aspect in light of future work in the context of the Solvency II review.

It should be noted that the ESRB (2018) has also identified a shortlist of options for additional provisions, measures and instruments, which reaches broadly similar conclusions to EIOPA’s.

This Discussion paper is based on the three papers previously published, which therefore underpin its content. Interested readers are encouraged to consult them for further information or details. Relevant references are included in each of the sections.

EIOPA has included questions on all three papers. The majority of the questions, however, revolve around the third paper on additional tools or measures, which is more relevant in light of the Solvency II review.

The Discussion paper primarily focuses on the “principles” of each tool, trying to explain their rationale. As such, it does not address the operational aspects/challenges of each tool (e.g. calibration, thresholds, etc.) in a comprehensive manner. Similar to the approach followed with other legislative initiatives, the technical details could be addressed by means of technical standards, guidelines or recommendations, once the relevant legal instrument has been enacted.

Definitions

EIOPA provided all relevant definitions in EIOPA (2018a). It has to be noted, however, that there is usually no unique or universal definition for all these concepts. EIOPA’s work did not seek to fill this gap. Instead, working definitions are put forward in order to set the scene and should therefore be considered in the context of this paper only.

  • Financial stability and systemic risk are two strongly related concepts. Financial stability can be defined as a state whereby the build-up of systemic risk is prevented.
  • Systemic risk means a risk of disruption in the financial system with the potential to have serious negative consequences for the internal market and the real economy.
  • Macroprudential policy should be understood as a framework that aims at mitigating systemic risk (or the build-up thereof), thereby contributing to the ultimate objective of the stability of the financial system and, as a result, the broader implications for economic growth.
  • Macroprudential instruments are qualitative or quantitative tools or measures with system-wide impact that relevant competent authorities (i.e. authorities in charge of preserving the stability of the financial system) put in place with the aim of achieving financial stability.

In the context of this paper, these concepts (i.e. tools, instruments and measures) are used as synonyms.

The macroprudential policy approach contributes to the stability of the financial system — together with other policies (e.g. monetary and fiscal) as well as with microprudential policies. Whereas microprudential policies primarily focus on individual entities, the macroprudential approach focuses on the financial system as a whole.

It should be taken into account that, in some cases, the borders between microprudential and macroprudential policies are blurred. That means, for example, that instruments that may have been designed as microprudential instruments may also have macroprudential consequences.

There are different institutional models for the implementation of macroprudential policies across the EU, in some cases involving different parties (e.g. ministries, supervisors, etc.). This paper adopts a neutral approach by referring to the generic concept of the ‘relevant authority in charge of the macroprudential policy’, which should encompass the different institutional models existing across jurisdictions. Sometimes a simplified term such as ‘the authorities’ or ‘the competent authorities’ is used.

Systemic risk in insurance

While a common understanding of the systemic relevance of the banking sector has been reached, the issue is still debated in the case of the insurance sector. In order to contribute to this debate, EIOPA developed a conceptual approach to illustrate the dynamics in which systemic risk in insurance can be created or amplified.

Main elements of EIOPA’s conceptual approach to systemic risk

  • Triggering event: An exogenous event that has an impact on one or several insurance companies and may initiate the whole process of systemic risk creation. Examples are macroeconomic factors (e.g. rising unemployment), financial factors (e.g. yield movements) or non-financial factors (e.g. demographic changes or cyber-attacks).
  • Company risk profile: The result of the collection of activities performed by the insurance company. These activities determine: a) the specific features of the company, reflecting the strategic and operational decisions taken; and b) the risk factors that the company is exposed to, i.e. the potential vulnerabilities of the company.
  • Systemic risk drivers: Elements that may enable the generation of negative spill-overs from one or more company-specific stresses into a systemic effect, i.e. they may turn a company-specific stress into a system-wide stress.
  • Transmission channels: Contagion channels that explain the process by which the sources of systemic risk may affect financial stability and/or the real economy. EIOPA distinguishes five main transmission channels: a) the exposure channel; b) the asset liquidation channel; c) lack of supply of insurance products; d) the bank-like channel; and e) expectations and information asymmetries.
  • Sources of systemic risk: These result from the systemic risk drivers and their transmission channels. They are direct or indirect externalities whereby insurance imposes a systemic threat on the wider system. These direct and indirect externalities lead to three potential categories of sources of systemic risk, which are not mutually exclusive: entity-based, activity-based and behaviour-based sources.

In essence and as depicted in Figure 1, the approach developed by EIOPA considers that a ‘triggering event’ initially has an impact at entity level, affecting one or more insurers through their ‘risk profile’. Potential individual or collective distresses may generate systemic implications, the relevance of which is determined by the presence of different ‘systemic risk drivers’ embedded in the insurance companies.

[Figure 1: EIOPA’s conceptual approach to systemic risk in insurance]

In EIOPA’s view, systemic events could be generated in two ways.

  1. The ‘direct’ effect, originated by the failure of a systemically relevant insurer or the collective failure of several insurers generating a cascade effect. This systemic source is defined as ‘entity-based’.
  2. The ‘indirect’ effect, in which possible externalities are enhanced by engagement in potentially systemic activities (activity-based sources) or the widespread common reactions of insurers to exogenous shocks (behaviour-based source).

Potential externalities generated via direct and indirect sources are transferred to the rest of the financial system and to the real economy via specific channels (i.e. the transmission channel) and could induce changes in the risk profile of insurers, eventually generating potential second-round effects.

The following table provides an overview of possible examples of triggering events, risk profiles, systemic risk drivers and transmission channels. It should therefore not be considered a comprehensive list of elements.

[Table: examples of triggering events, risk profiles, systemic risk drivers and transmission channels (EIOPA)]

Potential macroprudential tools and measures to enhance the current framework

In its third paper, EIOPA (2018c) carried out an analysis focusing on four categories of tools:

a) Capital and reserving-based tools;

b) Liquidity-based tools;

c) Exposure-based tools; and

d) Pre-emptive planning.

EIOPA also considers whether the tools should be used for enhanced reporting and monitoring or as intervention powers. Following this preliminary analysis, EIOPA concluded the following:

[Table: EIOPA’s preliminary conclusions on potential additional macroprudential tools and measures]

Example: Enhancement of the ORSA 

Description. In an ORSA, an insurer is required to consider all material risks that may have an impact on its ability to meet its obligations to policyholders. In doing this, a forward-looking perspective is also required. Although conceived at first as a microprudential tool, the ORSA could be enhanced to also take the macroprudential perspective into account.

Potential contribution to mitigate systemic risk. The enhancement of ORSA could help in mitigating two of the sources of systemic risk identified.

Proposal. This measure is proposed for further consideration for enhanced reporting and monitoring purposes.

Operational aspects. A description of all relevant operational aspects is carried out in EIOPA (2018c). In essence, the idea is to supplement the microprudential approach by assigning certain roles and responsibilities to the relevant authority in charge of the macroprudential policy (see Figure below). This authority could carry out three different tasks:

  1. Aggregation of information;
  2. Analysis of the information; and
  3. Provision of certain information or parameters to supervisors to channel macroprudential concerns.

Supervisors would then request undertakings to include in their ORSAs particular macroprudential risks.

Issues for consideration: In order to make the ORSA operational from a macroprudential point of view, the following would be needed:

  • A clarification of the role of the risk management function in order to include macroprudential concerns.
  • The inclusion of a new paragraph in Article 45 of the Solvency II Directive explicitly referring to the macroprudential dimension and to the need to consider the macroeconomic situation and potential sources of systemic risk as a follow-up to the assessment of whether the company complies on a continuous basis with the Solvency II regulatory capital requirements.
  • Clarification that a follow-up is expected after input from supervisors, namely from the authorities in charge of macroprudential policy. Under a risk-based approach, this might imply requesting specific information in terms of nature, scope, format and point in time, where justified by the likelihood or impact of materialisation of a certain source of systemic risk.

Furthermore, a certain level of harmonisation of the structure and content of the ORSA report would be needed, which would enable the identification of the relevant sections by the authorities in charge of macroprudential policies. This, however, would mean a change in the current approach followed with regard to the ORSA.

Click here to access EIOPA’s detailed Discussion Paper 2019

 

Outsourcing to the Cloud: EIOPA’s Contribution to the European Commission FinTech Action Plan

In the European financial regulatory landscape, the purchase of cloud computing services falls within the broader scope of outsourcing.

Credit institutions, investment firms, payment institutions and e-money institutions are subject to multiple level 1 and level 2 regulations that discipline their use of outsourcing (e.g. MiFID II, PSD2, BRRD). There are also level 3 measures: the CEBS Guidelines on Outsourcing, which represent the current guiding framework for outsourcing activities within the European banking sector.

Additional “Recommendations on cloud outsourcing” were issued on December 20, 2017 by the European Banking Authority (EBA) and entered into force on July 1, 2018. They will be repealed by the new guidelines on Outsourcing Arrangements (level 3) which have absorbed the text of the Recommendations.

For the (re)insurance sector, the current regulatory framework of Solvency II (level 1 and level 2) disciplines outsourcing under Articles 38 and 49 of the Directive and Article 274 of the Delegated Regulation. The EIOPA Guidelines 60-64 on System of Governance provide level 3 principle-based guidance.

On the basis of a survey conducted by the National Supervisory Authorities (NSAs), cloud computing is not yet extensively used by (re)insurance undertakings: it is used mostly by newcomers, within a few market niches and by larger undertakings, mostly for non-critical functions.

Moreover, as part of their wider digital transformation strategies, many large European (re)insurers are expanding their use of the cloud.

As to applicable regulation, cloud computing is considered outsourcing, and the current level of national guidance on cloud outsourcing for the (re)insurance sector is not homogeneous. Nonetheless, most NSAs (which are banking and (re)insurance supervisors at the same time) declare that they are considering the EBA Recommendations as a reference for the management of cloud outsourcing.

Under the steering of its InsurTech Task Force, EIOPA will develop its own Guidelines on Cloud Outsourcing. The intention is that the Guidelines on Cloud Outsourcing (the “guidelines”) will be drafted during the first half of 2019, then issued for consultation and finalised by the end of the year.

During the process of drafting the Guidelines, EIOPA will organise a public roundtable on the use of cloud computing by (re)insurance undertakings. During the roundtable, representatives from the (re)insurance industry, cloud service providers and the supervisory community will discuss views and approaches to cloud outsourcing in a Solvency II and post-EBA Recommendations environment.

Furthermore, in order to guarantee cross-industry harmonisation within the European financial sector, EIOPA has agreed with the other two ESAs:

  • to continue the fruitful alignment maintained so far; and
  • to start – in the second half of 2019 – a joint market monitoring activity aimed at developing policy views on how cloud outsourcing in the finance sector should be treated in the future.

This should take into account the increasing use of the cloud and the potential for large cloud service providers to be a single point of failure.

Overview of Cloud Computing

Cloud computing allows users to access on-demand, shared configurable computing resources (such as networks, servers, storage, applications and services) hosted by third parties on the internet, instead of building their own IT infrastructure.

According to the US National Institute of Standards and Technology (NIST), cloud computing is: “a model for enabling ubiquitous, convenient, on-demand network access to a shared pool of configurable computing resources (e.g., networks, servers, storage, applications, and services) that can be rapidly provisioned and released with minimal management effort or service provider interaction”.

The ISO standard of 2014 defines cloud computing as a “paradigm for enabling network access to a scalable and elastic pool of shareable physical or virtual resources with self-service provisioning and administration on-demand”. It is composed of:

  • cloud computing roles and activities,
  • cloud capabilities types and cloud service categories,
  • cloud deployment models and
  • cloud computing cross-cutting aspects.

The EBA Recommendations of 2017 – very close to the NIST definition – define cloud services as: “Services provided using cloud computing, that is, a model for enabling ubiquitous, convenient, on-demand network access to a shared pool of configurable computing resources (e.g. networks, servers, storage, applications and services) that can be rapidly provisioned and released with minimal management effort or service provider interaction.”

Shared responsibility framework

The cloud provider and cloud customer share the control of resources in a cloud system. The cloud’s different service models affect their control over the computational resources and, thus, what can be done in a cloud system. Compared to traditional IT systems, where one organization has control over the whole stack of computing resources and the entire life-cycle of the systems, cloud providers and cloud customers collaboratively

  • design,
  • build,
  • deploy, and
  • operate

cloud-based systems.

The split of control means that both parties share the responsibility for providing adequate protection to cloud-based systems. The picture below shows, as a “conceptual model”, the different levels of shared responsibility between the cloud provider and the cloud customer.

These responsibilities contribute to achieving a compliant and secure computing environment. It should be noted that, regardless of the service provided by the cloud provider:

  • Ensuring that data and its classification are handled correctly and that the solution is compliant with regulatory obligations is the responsibility of the customer (e.g. in the case of data theft the cloud customer is responsible towards the damaged parties, and the customer must ensure – e.g. through specific contractual obligations – that the provider observes certain compliance requirements, such as giving the competent authorities access and audit rights);
  • Physical security is the one responsibility that is wholly owned by cloud service providers when using cloud computing.

The remaining responsibilities and controls are shared between customers and cloud providers according to the outsourcing model. However, the responsibility (in a supervisory sense) remains with the customers. Some responsibilities require the cloud provider and the customer to manage and administer them together, including auditing of their respective domains. For example, for identity and access management using a cloud provider’s active directory services, the configuration of features such as multi-factor authentication may be up to the customer, while ensuring effective functionality is the responsibility of the cloud provider.
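
To make the conceptual model more tangible, the sketch below encodes an assumed split of responsibilities by service model (IaaS, PaaS, SaaS). Apart from the points stated above (physical security with the provider, data classification and regulatory compliance with the customer, identity and access management shared), the rows and assignments are illustrative assumptions; the actual split depends on the provider and the contract.

```python
# Conceptual sketch of the shared responsibility model by cloud service model.
# The rows and assignments are assumptions for illustration (except those stated in the
# text: physical security sits with the provider, data classification with the customer,
# identity and access management is shared); the real split depends on the contract.
RESPONSIBILITY = {
    # area                            IaaS         PaaS        SaaS
    "physical security":            ("provider", "provider", "provider"),
    "operating system patching":    ("customer", "provider", "provider"),
    "application configuration":    ("customer", "customer", "shared"),
    "identity & access management": ("shared",   "shared",   "shared"),
    "data classification":          ("customer", "customer", "customer"),
}

SERVICE_MODELS = {"IaaS": 0, "PaaS": 1, "SaaS": 2}

def owner(area: str, service_model: str) -> str:
    """Return who is responsible for a given area under a given service model."""
    return RESPONSIBILITY[area][SERVICE_MODELS[service_model]]

# The undertaking always retains responsibility for data classification and compliance.
print(owner("data classification", "SaaS"))        # -> customer
print(owner("operating system patching", "PaaS"))  # -> provider
```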

[Figure: shared responsibility model between cloud provider and cloud customer (EIOPA)]

Summary of Key Takeaways and EIOPA’s Answer to the European Commission

The key takeaways of the analysis carried out and described within this document are the following:

  1. cloud computing is used mostly by newcomers, within a few market niches and by larger undertakings, mostly for non-critical functions. However, as part of their wider digital transformation strategies, many large European (re)insurers are expanding their use of the cloud;
  2. the current regulatory framework of Solvency II (level 1 and level 2) appears sound to discipline outsourcing to the cloud through its current outsourcing provisions (Articles 38 and 49 of the Directive and Article 274 of the Delegated Regulation);
  3. cloud computing is a fast-developing service, so for its regulation to be efficient it should be principle-based rather than attempt to regulate all (re)insurance-related aspects of it;
  4. the cloud computing services used by (re)insurance undertakings are aligned with those used by the banking sector. The risks arising from the use of cloud computing by (re)insurance undertakings appear to be, generally, aligned with the risks borne by banking players, with a few minor (re)insurance specificities;
  5. both banking and (re)insurance regulations discipline cloud computing through their current outsourcing provisions. Under these, banking and (re)insurance institutions are required to classify whether the cloud services they receive are “critical or important”. The most common approach is to classify cloud computing on a case-by-case basis – as with other services – on the basis of the service, process, activity or data outsourced;
  6. the impact of cloud computing on the (re)insurance market is assessed differently among jurisdictions: due to the complexity and the highly technical nature of the subject, some jurisdictions have planned to issue (or have already issued) national guidance on cloud outsourcing directly applicable to the (re)insurance market;
  7. from the gap analysis carried out, the EBA Recommendations are more specific on the subject (e.g. the specific requirement to build a register of all cloud service providers) and, being built on shared common principles, can be applied to the wider Solvency II regulations on outsourcing, reflecting their status as level 3 measures;
  8. to provide legal transparency to market participants (i.e. regulated undertakings and service providers) and to avoid potential regulatory arbitrage, EIOPA should issue guidance on cloud outsourcing aligned with the EBA Recommendations and, where applicable, the EBA Guidelines on outsourcing arrangements, with minor amendments.

Click here to access EIOPA’s detailed Contribution Paper

The strategies shaping private equity in 2019 and beyond

For the past several years, fund managers have faced virtually the same challenge: how to put record amounts of raised capital to work productively amid heavy competition for assets and soaring purchase price multiples. Top performers recognize that the only effective response is to get better—and smarter.

We’ve identified four ways leading firms are doing so.

  • A growing number of general partners (GPs) are facing down rising deal multiples by using buy-and-build strategies as a form of multiple arbitrage, essentially scaling up valuable new companies by acquiring smaller, cheaper ones.
  • The biggest firms, meanwhile, are beating corporate competitors at their own game by executing large-scale strategic mergers that create value out of synergies and combined operational strength.
  • GPs are also discovering the power of advanced analytics to shed light on both value and risks in ways never before possible.
  • And they are once again exploring adjacent investment strategies that take advantage of existing capabilities, while resisting the temptation to stray too far afield.

Each of these approaches will require an investment in new skills and capabilities for most firms. Increasingly, however, continuous improvement is what separates the top-tier firms from the rest.

Buy-and-build: Powerful strategy, hard to pull off

While buy-and-build strategies have been around as long as private equity has, they’ve never been as popular as they are right now. The reason is simple: Buy-and-build can offer a clear path to value at a time when deal multiples are at record levels and GPs are under heavy pressure to find strategies that don’t rely on traditional tailwinds like falling interest rates and stable GDP growth. Buying a strong platform company and building value rapidly through well-executed add-ons can generate impressive returns.

As the strategy becomes more and more popular, however, GPs are discovering that doing it well is not as easy as it looks. When we talk about buy-and-build, we don’t mean portfolio companies that pick up one or two acquisitions over the course of a holding period. We also aren’t referring to onetime mergers meant to build scale or scope in a single stroke. We define buy-and-build as an explicit strategy for building value by using a well-positioned platform company to make at least four sequential add-on acquisitions of smaller companies. Measuring this activity with the data available isn’t easy. But you can get a sense of its growth by looking at add-on transactions. In 2003, just 21% of all add-on deals represented at least the fourth acquisition by a single platform company. That number is closer to 30% in recent years, and in 10% of the cases, the add-on was at least the 10th sequential acquisition.

Buy-and-build strategies are showing up across a wide swath of industries (see Figure 2.2). They are also moving out of the small- to middle-market range as larger firms target larger platform companies (see Figure 2.3). They are popular because they offer a powerful antidote to soaring deal multiples. They give GPs a way to take advantage of the market’s tendency to assign big companies higher valuations than smaller ones (see Figure 2.4). A buy-and-build strategy allows a GP to justify the initial acquisition of a relatively expensive platform company by offering the opportunity to tuck in smaller add-ons that can be acquired for lower multiples later on. This multiple arbitrage brings down the firm’s average cost of acquisition, while putting capital to work and building additional asset value through scale and scope. At the same time, serial acquisitions allow GPs to build value through synergies that reduce costs or add to the top line. The objective is to assemble a powerful new business such that the whole is worth significantly more than the parts.
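
A short worked example may help to illustrate the arithmetic of multiple arbitrage. The EBITDA figures and entry multiples below are illustrative assumptions, not figures drawn from the Bain analysis.

```python
# Purely illustrative arithmetic for multiple arbitrage in a buy-and-build strategy.
# The EBITDA figures and entry multiples are assumptions, not taken from the Bain analysis.

platform_ebitda, platform_multiple = 50.0, 12.0          # expensive platform company
addon_ebitda, addon_multiple, n_addons = 10.0, 7.0, 4    # four smaller, cheaper add-ons

total_cost = platform_ebitda * platform_multiple + n_addons * addon_ebitda * addon_multiple
total_ebitda = platform_ebitda + n_addons * addon_ebitda

blended_multiple = total_cost / total_ebitda
print(f"Blended entry multiple: {blended_multiple:.1f}x")   # 9.8x vs. 12.0x for the platform alone

# If the combined, larger business is valued at the platform's 12x at exit, the re-rating
# alone creates value before any cost or revenue synergies are counted.
exit_value = total_ebitda * platform_multiple
print(f"Value uplift from re-rating: {exit_value - total_cost:.0f}")   # 200
```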

Having coinvested in or advised on hundreds of buy-and-build deals over the past 20 years, we’ve learned that sponsors tend to underestimate what it takes to win. We’ve seen buy-and-build strategies offer firms a number of compelling paths to value creation, but we’ve also seen these approaches badly underperform other strategies. Every deal is different, of course, but there are patterns to success.

The most effective buy-and-build strategies share several important characteristics.

Too many attempts at creating value through buy-and-build founder on the shoals of bad planning. What looks like a slam-dunk strategy rarely is. Winning involves assessing the dynamics at work in a given sector and using those insights to weave together the right set of assets. The firms that get it right understand three things going in:

  • Deep, holistic diligence is critical. In buy-and-build, due diligence doesn’t start with the first acquisition. The most effective practitioners diligence the whole opportunity, not just the component parts. That means understanding how the strategy will create value in a given sector using a specific platform company to acquire a well-defined type of add-on. Are there enough targets in the sector, and is it stable enough to support growth? Does the platform already have the right infrastructure to make acquisitions, or will you need to build those capabilities? Who are the potential targets, and what do they add? Deep answers to questions like these are a necessary prerequisite to evaluating the real potential of a buy-and-build thesis.
  • Execution is as important as the investment. Great diligence leads to a great playbook. The best firms have a clear plan for what to buy, how to integrate it, and what roles fund management and platform company leadership will play. This starts with building a leadership team that is fit for purpose. It also means identifying bottlenecks (e.g., IT systems, integration team) and addressing them quickly. There are multiple models that can work—some rely on extensive involvement from deal teams, while others assume strong platform management will take the wheel. But given the PE time frame, the imperative is to have a clear plan up front and to accelerate acquisition activity during what inevitably feels like a very short holding period.
  • Pattern recognition counts. Being able to see what works comes with time and experience. Learning, however, relies on a conscious effort to diagnose what worked well (or didn’t) with past deals. This forensic analysis should include the choice of targets, as well as how decisions along each link of the investment value chain (either by fund management or platform company management) created or destroyed value. Outcomes improve only when leaders use insights from past deals to make better choices the next time.

At a time when soaring asset prices are dialing up the need for GPs to create value any way they can, an increasing number of firms are turning to buy-and-build strategies. The potential for value creation is there; capturing it requires

  • sophisticated due diligence,
  • a clear playbook,
  • and strong, experienced leadership.

[Figure (Bain & Company)]

Merger integration: Stepping up to the challenge

PE funds are increasingly turning to large-scale M&A to solve what has become one of the industry’s most intractable problems—record amounts of money to spend and too few targets. GPs have put more money to work over the past five years than during any five-year period in the buyout industry’s history. Still, dry powder, or uncalled capital, has soared 64% over the same period, setting new records annually and ramping up pressure on PE firms to accelerate the pace of dealmaking.

One reason for the imbalance is hardly a bad problem: Beginning in 2014, enthusiastic investors have flooded buyout funds with more than $1 trillion in fresh capital. Another issue, however, poses a significant conundrum: PE firms are too often having to withdraw from auctions amid fierce competition from strategic corporate buyers, many of which have a decided advantage in bidding. Given that large and mega-buyout funds of $1.5 billion or more hold two-thirds of the uncalled capital, chipping away at the mountain of dry powder will require more and bigger deals by the industry’s largest players (see Figure 2.6). Very large public-to-private transactions are on the rise for precisely this reason.

But increasingly, large funds are looking to win M&A deals by recreating the economics that corporate buyers enjoy. This involves using a platform company to hunt for large-scale merger partners that add strategic value through scale, scope or both.

Making it all work, of course, is another matter. Large-scale, strategic M&A solves one problem for large PE firms by putting a lot of capital to work at once, but it also creates a major challenge: capturing value by integrating two or more complex organizations into a bigger one that makes strategic and operational sense. Bain research shows that, while there is clear value in making acquisitions large enough to have material impact on the acquirer, the success rate is uneven and correlates closely to buyer experience (see Figure 2.7). The winners do this sort of deal relatively frequently and turn large-scale M&A into a repeatable model. The laggards make infrequent big bets, often in an attempt to swing for the fences strategically. Broken deals tend to fail because firms stumble over merger integration. They enter the deal without an integration thesis or try to do everything at once. They don’t identify synergies with any precision, or fail to capture the ones they have identified. GPs neglect to sort out leadership issues soon enough, or they underestimate the challenge of merging systems and processes. For many firms, large-scale merger integration presents a steep learning curve.

In our experience, success in a PE context requires a different way of approaching three key phases of the value-creation cycle:

  • due diligence,
  • the post-announcement period
  • and the post-close integration period (see Figure 2.8).

In many ways, what happens before the deal closes is almost as important as what happens after a firm assumes ownership. Top firms invest in deep thinking about integration from the outset of due diligence. And they bring a sharp focus to how the firm can move quickly and decisively during the holding period to maximize time to value.

In a standalone due diligence process, deal teams focus on a target’s market potential, its competitiveness, and opportunities to cut costs or improve performance. In a merger situation, those things still matter, but since the firm’s portfolio company should have a good understanding of the market already, the diligence imperative switches to a bottom-up assessment of the potential synergies:

  • Measuring synergies. Synergies typically represent most of a merger deal’s value, so precision in underwriting them is critical. High-level benchmarks aren’t sufficient; strong diligence demands rigorous quantification. The firm has to decide which synergies are most important, how much value they represent and how likely they are to be captured within the deal’s time frame. A full understanding of the synergies available in a deal like this allows a firm to bid as aggressively as possible. It often gives the deal team the option to share the value of synergies with the seller in the form of a higher acquisition price. On the other hand, the team also needs to account for dis-synergies—the kinds of negative outcomes that can easily lead to value destruction.
  • Tapping the balance sheet. One area of potential synergies often underappreciated by corporate buyers is the balance sheet. Because companies in the same industry frequently share suppliers and customers, combining them presents opportunities to negotiate better contracts and improve working capital. There might also be a chance to reduce inventory costs by pooling inventory, consolidating warehouses or rationalizing distribution centers. At many target companies, these opportunities represent low-hanging fruit, especially at corporate spin-offs, since parent companies rarely manage the working capital of individual units aggressively. Combined businesses can also trim capital expenditures.
  • Managing the “soft” stuff. While these balance sheet issues play to a GP’s strong suit, people and culture issues usually don’t. PE firms aren’t known for their skill in diagnosing culture conflicts, retaining talent or working through the inevitable HR crises raised by integration. Firms often view these so-called soft issues as secondary to the things they can really measure. Yet people problems can quickly undermine synergies and other sources of value, not to mention overall performance of the combined company. To avoid these problems, it helps to focus on two things in due diligence. First, which of the target company’s core capabilities need to be preserved, and what will it take to retain the top 10 people who deliver them? Second, does the existing leadership team—on either side of the transaction—understand how to integrate a business? The firm needs to know whether those responsible for leading the integration have done it before, whether they’ve been successful and whether the firm can trust them to do it successfully in this situation. PE owners are often more involved in integration than the board of a typical corporation. It’s important not to overstep, however. Bigfooting the management team is a sure way to spur a talent exodus.

For PE firms eager to put money to work, great diligence in a merger context is critical. It should not only answer questions such as “How much value can we underwrite?” but also evaluate whether to do the deal at all. Deal teams have to resist the urge to make an acquisition simply because the clock is ticking. Corporate buyers often take years to identify and court the right target. While it’s true that PE firms rarely have that luxury, no amount of merger integration prowess can make up for acquiring a company that just doesn’t fit.

Once the hard work of underwriting value and generating a robust integration thesis is complete, integration planning begins in earnest. A successful integration has three major objectives:

  • capturing the identified value,
  • managing the people issues,
  • and integrating processes and systems (see Figure 2.9).

This is where the Integration Management Office (IMO) needs to shine. As the central leadership office, its role is to keep the integration effort on track and to hit the ground running on day one. Pre- and post-close, the IMO

  • monitors risks (including interdependences),
  • tracks and reports on team progress,
  • resolves conflicts,
  • and works to achieve a consistent drumbeat of decisions and outcomes.

It manages dozens of integration teams, each with its own detailed work plan, key performance indicators and milestones. It also communicates effectively to all stakeholders.

  • Capturing value. An often-underappreciated aspect of the early merger integration process is the art of maintaining continuity in the base business. Knitting together the two organizations and realizing synergies is essential, but value can be lost quickly if a chaotic integration process gets in the way of running the core. Management needs to reserve focus for day-to-day operations, keeping close tabs on customers and vendors, and intervening quickly if problems crop up. At the same time, it is important to validate and resize the value-creation initiatives and synergies identified in diligence. The team has to create a new value roadmap that articulates in detail the value available and how to capture it. This document redefines the size of the prize based on real data. It should be cascaded down through the organization to inform detailed team-level work plans.
  • Tackling the people challenge. Integrating large groups of people is very often the most challenging—and overlooked—aspect of bringing two companies together. Mergers are emotionally charged events that take employees out of their comfort zone. While top leadership may be thinking about pulling the team together to find value, the people on the ground, understandably, are focused on what it means for them. The change disrupts everybody; nobody knows what’s coming, and human nature being what it is, people often shut down. Getting ahead of potential disaster involves three critical areas of focus:
    • retaining key talent,
    • devising a clear operating model
    • and solving any culture issues.

Talent retention boils down to identifying who creates the most value at the company and understanding what motivates them. Firms need to isolate the top 50 to 100 individuals most responsible for the combined company’s value and devise a retention plan tailored to each one. Keeping these people on board will likely involve financial incentives, but it may be more important to present these stars with a clear vision for the future and how they can bring it to life by excelling in mission-critical roles. It is also essential to be decisive and fair in making talent decisions (see Figure 2.10). Assigning these roles is an outgrowth of a larger challenge: devising a fit-for-purpose operating model that aligns with the overall vision for the company. This is the set of organizational elements that helps translate business strategy into action. It defines roles, reporting relationships and decision rights, as well as accountabilities. Whether this new model works will have a lot to do with how well leadership manages the cultural integration challenge. Nothing can destroy value faster than internal dysfunction, but getting it right can be a delicate exercise.

  • Processes and systems. The final integration imperative—designing and implementing the new company’s processes and systems—is all about anticipating how things will get done in the new company and building the right infrastructure to support that activity. PE firms must consider which processes to integrate and which to leave alone. The north star on these decisions is which efforts will directly accrue to value within the deal time frame and which can wait. Often, this means designing an interim and an end-state solution, ensuring delivery of critical functionality now while laying the foundation for the optimal long-term solution. Integrating IT systems requires a similar decision-making process, focused on what will create the most value. If capturing synergies in the finance department involves cutting headcount within several financial planning and analysis teams, that might only happen when they are on a single system. Likewise, if the optimal operating model calls for a fully integrated sales and marketing team, then working from a single CRM system makes sense. Most PE firms are hyperfocused on the expense involved in these sorts of decisions. They weigh the onetime costs of integration against a sometimes-vague potential return and ultimately decide not to push forward. This may be a mistake. Taking a more expansive view of potential value often pays off. Early investments in IT, for instance, may look expensive in the short run. But to the extent that they make possible future investments in better capabilities or continued acquisitions, they can be invaluable.

Bain2

Bain3

Adjacency strategy: Taking another shot at diversification

Given the amount of capital gushing into private equity, it’s not surprising that PE firms are diversifying their fund offerings by launching new strategies. The question is whether this wave of diversification can produce better results than the last one. History has shown that expanding thoughtfully into the right adjacencies can deliver great results. But devoting time, capital and talent to strategies that stray too far afield can quickly sap performance.

In the mid-1990s, the industry faced a similar challenge in putting excess capital to work. As institutions and other large investors scoured the investment landscape for returns, they increased allocations to alternative investments, including private equity. Larger PE funds eagerly took advantage of the situation by branching into different geographies and asset classes. This opened up new fee and revenue streams, and allowed the funds to offer talented associates new opportunities. Funds first expanded geographically, typically by crossing the Atlantic from the US to Europe, then extending into Asia and other regions by the early 2000s (see Figure 2.12). Larger firms next began to experiment with asset class diversification, creating

  • growth and venture capital funds,
  • real estate funds,
  • mezzanine financing
  • and distressed debt vehicles.

Many PE firms found it more challenging to succeed in new geographies and especially in different asset classes. Credit, infrastructure, real estate and hedge funds held much appeal, in part because they were less correlated with equity markets and offered new pools of opportunity. But critically, most of these asset classes also required buyout investors to get up to speed on very different capabilities, and they offered few synergies. Compared with buyouts, most of these adjacent asset classes had a different investment thesis, virtually no deal-sourcing overlap, little staff or support-function cost sharing, and a different limited partner (LP) risk profile. To complicate matters, PE firms found that many of these adjacencies offered lower margins than their core buyout business. Some came with lower fees, and others did not live up to performance targets. Inherently lower returns for LPs made it difficult to apply the same fee structures as for traditional buyouts. To create attractive total economics and pay for investment teams, PE firms needed to scale up some of these new products well beyond what they might do in buyouts. That, in turn, threatened to change the nature of the firm.

For large firms that ultimately went public, like KKR, Blackstone and Apollo, the shift in ownership intensified the need to produce recurring, predictable streams of fees and capital gains. Expanding at scale in different asset classes became an imperative. And today, buyouts represent a minority of their assets under management.

As other firms pursued diversification, however, the combination of different capabilities and lower returns wasn’t always worth the trade-off. When the global financial crisis hit, money dried up, causing funds to retrench from adjacencies that did not work well—either because of a lack of strategic rationale or because an asset class struggled overall. Of the 100 buyout firms that added adjacencies before 2008 (roughly 1 in 10 firms active then), 20% stopped raising capital after the crisis, and nearly 65% of those left had to pull out from at least one of their asset classes (see Figure 2.13).

Diversification, it became clear, was trickier to navigate than anticipated. Succeeding in any business that’s far from a company’s core capabilities presents a stiff challenge—and private equity is no different. To test this point, we looked at a sample of funds launched between 1998 and 2013 by 184 buyout firms for which we had performance data, each of which had raised at least $1.5 billion during that period. We found that, when it comes to maintaining a high level of returns, staying close to the core definitely matters. Our study defined “core/near-in” firms as those that dedicated at least 90% of their raised capital to buyouts and less than 5% to adjacencies (including infrastructure, real estate and debt). We compared them to firms that moved further away from the core (dedicating more than 5% to adjacencies). The results: On average, 28% of core/near-in firms’ buyout funds generated top-quartile IRR performance, vs. 21% for firms that moved further afield (see Figure 2.14). The IRR gap for geographic diversification is more muted, because making such moves is generally easier than crossing asset types. But expanding into a new country or region does require developing or acquiring a local network, as well as transferring certain capabilities. And the mixed IRR record that we identified still serves as a caution: Firms need to be clear on what they excel at and exactly how their strengths could transfer to adjacent spaces.

With a record amount of capital flowing into private equity in recent years, general partners (GPs) again face the question of how to deploy more capital through diversification. While a few firms, such as Hellman & Friedman, remain fully committed to funding their core buyout strategy, not many can achieve such massive scale in one asset class. As a result, a new wave of PE products is finding favor with both GPs and LPs. Top performers are considering adjacencies that are one step removed from the core, rather than two or three steps removed. The best options take advantage of existing platforms, investment themes and expertise. They’re more closely related to what PE buyout firms know how to do, and they also hold the prospect of higher margins for the GP and better net returns for LPs. In other words, these new products are a different way to play a familiar song.

There are any number of ways for firms to diversify, but several stand out in today’s market (see Figure 2.15):

  • Long-hold funds have a life span of up to 15 years or so, offering a number of benefits. Extending a fund’s holding period allows PE firms to better align with the longer investment horizon of sovereign wealth funds and pension funds. It also provides access to a larger pool of target companies and allows for flexibility on exit timing with fewer distractions. These funds represent a small but growing share of total capital.
  • Growth equity funds target minority stakes in growing companies, usually in a specific sector such as technology or healthcare. Though the field is getting more crowded, growth equity has been attractive given buyout-like returns, strong deal flow and less competition than for other types of assets. Here, a traditional buyout firm can transfer many of its core capabilities. Most common in Asia, growth equity has been making inroads in the US and Europe of late.
  • Sector funds focus exclusively on one sector in which the PE firm has notable strengths. These funds allow firms to take advantage of their expertise and network in a defined part of the investing landscape.
  • Mid-market funds target companies whose enterprise value typically ranges between $50 million and $500 million, allowing the firm to tap opportunities that would be out of scope for a large buyout fund.

All of the options described here have implications for a PE firm’s operating model, especially in terms of retaining talent, communicating an adjacency play to LPs, avoiding cannibalization of the firm’s traditional buyout funds and sorting out which deal belongs in which bucket.

GPs committed to adjacency expansion should ask themselves a few key questions:

  • Do we have the resident capabilities to execute well on this product today, or can we add them easily?
  • Does the asset class leverage our cost structure?
  • Do our customers—our LPs—want these new products?
  • Can we provide the products through the same channels?
  • Have we set appropriate expectations for the expansion, both for returns and for investments?

Clear-eyed answers to these questions will determine whether, and which, adjacencies make sense. The past failures and retrenchments serve as a reminder that investing too far afield risks distracting GPs from their core buyout funds. Instead, a repeatable model consists of understanding which strengths a fund can export and thoughtfully mapping those strengths to the right opportunities (see Figure 2.16).

Adjacency expansion will remain a popular tack among funds looking for alternative routes to put their capital to work. Funds that leverage their strengths in a disciplined, structured way stand the best chance of reaping healthy profits from expansion.

Bain4

Bain5

Advanced analytics: Delivering quicker and better insights

At a time when PE firms face soaring asset prices and heavy competition for deals, advanced analytics can help them derive the kinds of proprietary insights that give them an essential edge against rivals. These emerging technologies can offer fund managers rapid access to deep information about a target company and its competitive position, significantly improving the firm’s ability to assess opportunities and threats. That improves the firm’s confidence in bidding aggressively for companies it believes in—or walking away from a target with underlying issues.

What’s clear, however, is that advanced analytics isn’t for novices. Funds need help in taking advantage of these powerful new tools. The technology is evolving rapidly, and steady innovation creates a perplexing array of options. Using analytics to full advantage requires staying on top of emerging trends, building relationships with the right vendors, and knowing when it makes sense to unleash teams of data scientists, coders and statisticians on a given problem. Bain works with leading PE firms to sort through these issues, evaluate opportunities and build effective solutions. We see firms taking advantage of analytics in several key areas.

Many PE funds already use scraping tools to extract and analyze data from the web. Often, the goal is to evaluate customer sentiment or to obtain competitive data on product pricing or assortment. New tools make it possible to scrape the web much more efficiently, while gaining significantly deeper insights. Deployed properly, they can also give GPs the option to build proprietary databases over time by gathering information daily, weekly or at other intervals. Using a programming language such as Python, data scientists can direct web robots to search for and extract specific data much more quickly than in the past (see Figure 2.17). With the right code and the right set of target websites, new tools can also allow firms to assemble proprietary databases of historical information on pricing, assortment, geographic footprint, employee count or organizational structure. Analytics tools can access and extract visible and hidden data (metadata) as frequently as fund managers find useful.
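
To make this concrete, here is a minimal Python sketch of the kind of scraping routine described above. It is illustrative only: the URL, CSS selectors and output fields are hypothetical, it assumes the requests and BeautifulSoup libraries are available, and any real deployment would need to respect robots.txt, site terms and rate limits.

# Minimal illustration of the scraping approach described above.
# The URL, CSS selectors and output schema are hypothetical; a real
# pipeline must respect robots.txt, site terms and rate limits.
import csv
import datetime as dt

import requests
from bs4 import BeautifulSoup

TARGET_URL = "https://example.com/products"  # hypothetical target site

def scrape_prices(url: str) -> list[dict]:
    """Extract product name and price pairs from a single listing page."""
    html = requests.get(url, timeout=30).text
    soup = BeautifulSoup(html, "html.parser")
    rows = []
    for card in soup.select("div.product-card"):  # hypothetical selector
        rows.append({
            "date": dt.date.today().isoformat(),
            "product": card.select_one("h2.name").get_text(strip=True),
            "price": card.select_one("span.price").get_text(strip=True),
        })
    return rows

def append_to_history(rows: list[dict], path: str = "price_history.csv") -> None:
    """Append today's observations to a growing proprietary dataset."""
    with open(path, "a", newline="") as f:
        writer = csv.DictWriter(f, fieldnames=["date", "product", "price"])
        if f.tell() == 0:
            writer.writeheader()
        writer.writerows(rows)

if __name__ == "__main__":
    append_to_history(scrape_prices(TARGET_URL))  # schedule daily or weekly to build history

Scheduled daily or weekly, a routine of this sort is how a proprietary pricing or assortment database accumulates over time.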

Most target companies these days sell through online channels and rely heavily on digital marketing. Fewer do it well. The challenge for GPs during due diligence is to understand quickly if a target company could use digital technology more effectively to create new growth opportunities. Post-acquisition, firms often need similar insights to help a portfolio company extract more value from its digital marketing strategy. Assessing a company’s digital positioning—call it a digital X-ray—is a fast and effective way to gain these insights. For well-trained teams, it requires a few hours to build the assessment, and it can be done from the outside in—before a fund even bids. It is also relatively easy to ask for access to a target company’s Google AdWords and Google Analytics platforms. That can produce a raft of digital metrics and further information on the target’s market position.

One challenge for PE funds historically has been accessing data from large networks or from scattered and remote locations. But new tools let deal teams complete such efforts in a fraction of the time and cost.

One issue that PE deal teams often ponder in evaluating companies is traffic patterns around retail networks, manufacturing facilities and transport hubs. Is traffic rising or declining? What’s the potential to increase it? In some industries, it’s difficult to track such data, especially for competitors. But high-definition satellite or drone imagery can be used to glean insights from traffic flows over time.
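
As a purely hypothetical illustration, suppose an object-detection step (not shown here) has already produced vehicle counts from a series of satellite or drone images of a retail site; a simple least-squares fit is then enough to show whether traffic is trending up or down. The counts below are invented.

# Hypothetical illustration: once an object-detection step has produced
# vehicle counts per satellite/drone image, a simple trend estimate can
# show whether traffic around a site is rising or declining.
import numpy as np

# (month index, vehicles counted in the parking lot) - illustrative data only
observations = [(0, 412), (1, 398), (2, 431), (3, 455), (4, 470), (5, 492)]

months = np.array([m for m, _ in observations], dtype=float)
counts = np.array([c for _, c in observations], dtype=float)

# Least-squares linear fit: slope = average change in vehicle count per month
slope, intercept = np.polyfit(months, counts, deg=1)
pct_per_month = slope / counts.mean() * 100

print(f"Trend: {slope:+.1f} vehicles per month ({pct_per_month:+.1f}% of average traffic)")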

Another advantage of analytics tools is the ability to see around corners, helping fund managers anticipate how disruptive new technologies or business models may change the market. Early signs of disruption are notoriously hard to quantify. Traditional measures such as client satisfaction or profitability won’t ring the warning bells soon enough. Even those who know the industry best often fail to anticipate technological disruptions. With access to huge volumes of data, however, it’s easier to track possible warning signs, such as the level of innovation or venture capital investment in a sector. That’s paved the way for advanced analytics tools that allow PE funds to spot early signals of industry disruption, understand the level of risk and devise effective responses. These insights can be invaluable, enabling firms to account for disruption as they formulate bidding strategies and value-creation plans.

These are just a few of the ways that PE firms can apply advanced analytics to improve deal analysis and portfolio company performance. We believe that the burst of innovation in this area will have profound implications for how PE funds go about due diligence and manage their portfolio companies. But most funds will need to tap external expertise to stay on top of what’s possible. A team-based approach that assembles the right expertise for a given problem helps ensure that advanced analytics tools deliver on their promise.

Bain6

Click here to access Bain’s Private Equity Report 2019

How to Transform Your CX Strategy with AI

Consumers have more ways than ever to communicate with the brands they buy — be it through private chat or in public on social media sites such as Twitter. If a conversation conveys a negative sentiment, it can be detrimental if it’s not addressed quickly. Many companies are leaning on early stage AI tools for help.

Companies can use artificial intelligence in customer service to build a brand that’s associated with excellent customer experience (CX). This is critically important in an era in which consumers can easily compare product prices on the web, said Gene Alvarez, a Gartner managing VP, during a September 2018 webinar in which analysts discussed ways artificial intelligence in customer service can drive business growth. “When your price is equal, what’s left? Your customer experience,” Alvarez said. “If you deliver a poor customer experience, they’ll go with the company that delivers a good one. This has created a challenge for organizations trying to take on the behemoths who are doing well with customer experience, with the challenge being scale.”

AI in customer service enables companies to understand what their customers are doing today and to quickly scale CX strategies in response. Chatbots can be deployed relatively quickly to handle customer requests around the clock, while social listening tools can track customer sentiment online to gain insight, identify potential new customers, and take proactive action to protect and grow brands.

As a result, AI technologies including text analytics, sentiment analysis, speech analytics and natural language processing all play an increasingly important role in customer experience management. By 2021, 15% of all customer service interactions will be handled by AI — that’s 400% higher than in 2017, according to Gartner.
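
As a toy illustration of the sentiment-analysis piece, the sketch below scores social posts with a hand-made word list and flags the most negative ones for a human agent. Production social-listening tools rely on trained NLP models rather than keyword lists; the words and posts here are invented.

# Toy lexicon-based sentiment scorer for social listening, to illustrate
# how negative posts can be flagged for a fast response. Real deployments
# would use trained NLP/sentiment models rather than a hand-made word list.
NEGATIVE = {"broken", "refund", "terrible", "waiting", "worst", "angry"}
POSITIVE = {"great", "love", "fast", "helpful", "recommend", "thanks"}

def sentiment_score(text: str) -> int:
    """Positive score = favourable, negative score = complaint-like."""
    words = {w.strip(".,!?").lower() for w in text.split()}
    return len(words & POSITIVE) - len(words & NEGATIVE)

def flag_for_agent(posts: list[str], threshold: int = -1) -> list[str]:
    """Return posts negative enough to route to a human agent immediately."""
    return [p for p in posts if sentiment_score(p) <= threshold]

posts = [
    "Love the new app, checkout is so fast!",
    "Still waiting on my refund, this is the worst service ever.",
]
print(flag_for_agent(posts))  # flags the second post for immediate follow-up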

Where AI for customer service makes sense

With the current hype around AI, companies may rush into projects without thinking about how artificial intelligence can help execute their vision for customer experience — if it’s appropriate at all, Alvarez said.

“Organizations have to ask the question, ‘How will I use AI to build the next component of my vision in terms of execution from a strategy perspective?’ [and] not just try AI at scattershot approaches,” he said. “Look for moments of truth in the customer experience and say, ‘This is a good place to try [AI] because it aligns with our vision and strategy and the type of customer experience we want to deliver.’”

For example, an extraordinary number of companies have deployed chatbots or virtual assistants or are in the process of deploying them. Twenty-five percent of customer service and support operations will integrate bot technology across their engagement channels by 2020, up from less than 2% in 2017, Gartner reported.

But chatbots certainly aren’t the right choice for all companies. Customers who shop a luxury brand may expect a higher level of personalized customer service; self-service models and chatbots aren’t appropriate for customers who expect their calls to be answered by a person, Alvarez said.

And it’s no secret that virtual agents haven’t delivered the success companies hoped for with AI in customer service, said Brian Manusama, Gartner research director, in the webinar. All the experimentation with chatbots and virtual agents has, in some cases, hurt the customer experience instead of contributing to it. Companies have a long way to go to learn which technologies to use for the right use cases, he said.

“Companies really getting into [AI for CX] are disproportionally getting rewarded for it while companies that don’t do well with it are getting disproportionally punished for it,” Manusama said.

Match the product to the CX

The first step in choosing software for artificial intelligence in customer service is to understand that there is no single tool that works for every customer in every scenario, said Whit Andrews, an analyst at Gartner. For example, a customer who buys an inexpensive product may be fine interacting with a chatbot about that purchase, but not other types of purchases, he said.

“You have to identify the people who want to work with a chatbot and be realistic about the fact that if someone says they’d rather work with a chatbot, they might mean that for one situation but not another,” Andrews said.

To put a finer point on it, Jessica Ekholm, a Gartner research VP, advised companies to “pick the right battles” with AI tools by examining where the customer pain points are and developing a CX strategy that uses artificial intelligence in customer service strategically.

Cohesive AI in CRM strategies requires a singular 360 view

AI in CRM today is like mobile in the 1990s and social media channels in the 2000s, according to Jeff Nicholson, vice president of CRM product marketing at Pegasystems: It seems everyone wants a piece of the pie.

“Companies are anxious to deploy AI, so they try a little over here, maybe a little over there, just to keep up,” Nicholson said. “Before you know it, you’ve created another stack of silos across the enterprise.” To succeed with AI in CRM, he explained, organizations need a holistic strategy that ties AI across all departments and customer-facing channels. Using a channel-less approach, companies can avoid a disjointed user experience and very frustrated customers and instead take advantage of the full power of their data.

With the AI brain at the center of an experience platform, businesses can react in real time through chatbots, mobile apps, webpages, on the phone or in person at the store, Nicholson said. “This singular AI brain approach,” he noted, “allows [companies] to extend predictive intelligence to all other channels, without having to start from scratch for each new interface that comes along.”

Pega is ahead of the curve, he claimed, with the Customer Decision Hub, which serves as the central AI brain across all its CRM applications — from marketing to sales to customer service. “We’ve seen our clients leverage it to redefine how they engage with customers to turn their businesses around,” he reported, citing two examples: Royal Bank of Scotland raised its Net Promoter Score by 18 points across its 17 million customers, while Sprint overcame industry-high turnover rates and realized a 14% increase in customer retention.

SAP’s Leonardo AI and machine learning tool can help companies with their digital transformation and customer engagement strategies. It also helps organizations address key technologies, including machine learning, internet of things, blockchain, big data and analytics. SAP Hybris follows an organic development approach to AI in CRM, using data scientists and development teams across all areas of the business. Find out more about Pega, Oracle, Salesforce and SAP AI systems in the following chart:

content_management-cohesive_ai_crm

AI strategy comes first, then AI tools second

For all the talk and focus on technological innovations that have disrupted and changed business processes, what has really changed the most during the technology revolution of the last 20 years is the customer.

Customers enter the buying process equipped with more information and perspective than ever before. Gone is the era of purely personal experience and finite word-of-mouth reviews; customers can now draw on millions of other customers’ experiences through social media and online reviews, as well as virtually unlimited resources for product or service comparisons. This paradigm shift has left marketers, sellers and service teams playing catch-up, developing strategies and technology to better equip themselves and capitalize on the customer experience.

Companies and brands hope that infusing a CRM AI strategy within their business will help balance the scales when interacting with customers. No business wants to enter a negotiation knowing less than its counterpart. And based on the marketing churn of most software companies, it’s easy to assume that many businesses have already implemented AI into their marketing and sales processes, and those that haven’t will be left in the dust.

“If the AI-driven environment can learn enough and be trained correctly, it can deliver better [experiences to] customers that are more relevant and timely and on the right device and right promotion,” Forrester Research principal analyst Joe Stanhope said. But AI in customer experience comes with a caveat. “It will play out as a multiyear process, and it’s not necessarily a technology problem,” Stanhope warned. “It’s more of a change management and a cultural issue.”

Delivering on customer expectations

The importance of implementing an AI strategy into the customer experience isn’t lost on business executives. According to Bluewolf’s latest “State of Salesforce” annual report, 63% of C-level executives are counting on AI to improve the customer experience. A 2017 IBM study also indicated that 26% of respondents expect AI to have a significant impact on customer experience today, while 47% expect the impact to be within the next two or three years.

Chief marketing officers set sights on CRM AI

In the next two to three years, one-third of organizations plan to implement AI technologies, according to a 2017 study conducted by the IBM Institute for Business Value. Yet some organizations surveyed have already implemented AI technologies and intend to license more.

IBM’s “Cognitive Catalysts: Reinventing Enterprises and Experiences With Artificial Intelligence” divided chief marketing officers into three groups of respondents:

  • Reinventors are AI-enabled with significant future investment,
  • Tacticians are AI-enabled with minimal future investment
  • and Aspirationals are planning their first AI-enabled investment.

In the next two years, 63% of reinventors, 48% of tacticians and 70% of aspirationals plan to implement AI technologies to help reinvent the customer experience, demonstrating that an AI implementation needs to start at the executive level and work its way down to the user base.

By then, there should be a substantial increase in use cases for AI customer service — not just in the product servicing sense, but also in the marketing and sales stages of the customer experience. “Buyers expect something different these days; they come in much more educated,” said Dana Hamerschlag, chief product officer at sales consultancy Miller Heiman Group. “The trick and challenge around AI is how do you leverage this powerful machine to tell you that process, rather than just give you the outcome data.”

The significance of gaining an edge on the customer extends to marketing, too, with a CRM AI strategy that can solve prospecting concerns. According to Bluewolf’s annual report, 33% of marketing organizations that are increasing AI capabilities within the next year expect the technology to have the greatest impact on the ability to qualify prospects. “You need to enter a conversation with a customer understanding their context,” Hamerschlag advised. “You need to be informed and, with AI, not only [of] who they are but what they have looked at, what they are reading on my site, what emails they have opened.”

Technology based on strategy

The emphasis on customer experience has provided an outlet for AI’s potential. Companies are beginning to explore ways that a CRM AI strategy and the subsequent technologies can help improve customer service and experience.

Personalized photo books company Chatbooks Inc. helps customers convert photos on their phone or tablet into physical photo albums. It uses customer service reps to help customers complete the process and started implementing chatbots to streamline the customer service process. “It’s important that the customer service team is there when customers need them,” said Angel Brockbank, director of customer experience at Chatbooks, based in Provo, Utah.

The initial chatbot established by Chatbooks, created using Helpshift, a San Francisco-based customer service platform, helps customers create an account and input basic information like name and email. Brockbank said the company has an AI strategy in place and will be implementing another chatbot to help direct customer inquiries to the correct chat agent. “We haven’t done that yet,” she acknowledged, “but it will be helpful and useful for our team.”

This blending of product and experience has created an important need for AI technologies, according to Mika Yamamoto, chief digital marketing officer at SAP. “The technology is only as good as the strategy that goes with it,” Yamamoto said. “Companies have to understand how they want to show up for their customers and what type of customer engagement or experience they’re trying to enable.”

One of the impediments to implementing AI is employee adoption, according to a recent Forrester survey. Among CRM professionals, 28% said that one of the largest challenges to improving CRM last year was gaining user acceptance of new technologies, compared to 20% in 2015, a 40% increase. However, the CRM professionals thought it was easier working with IT to adopt new technologies last year (19%) than it was in 2015 (31%), a near 40% drop.

Still, the increased importance of the customer experience, and of knowing the customer, is the main objective driving an AI strategy and the departmental changes it requires. In the Forrester survey, 64% of CRM professionals said creating a single view of customer data and information is the largest challenge they face when improving CRM capabilities, up from 47% in 2015.

BI0618_AI-impact

Click here to access TechTarget’s publication

Four elements that top performers include in their digital-strategy operating model

For many companies, the process of building and executing strategy in the digital age seems to generate more questions than answers. Despite digital’s dramatic effects on global business—the disruptions that have upended industries and the radically increasing speed at which business is done—the latest McKinsey Global Survey on the topic suggests that companies are making little progress in their efforts to digitalize the business model. Respondents who participated in this year’s and last year’s surveys report roughly the same degree of digitalization as they did one year ago, suggesting that companies are getting stuck in their efforts to digitally transform their business.

The need for an agile digital strategy is clear, yet it eludes many—and there are plenty of pitfalls that we know result in failure. McKinsey has looked at how some companies are reinventing themselves in response to digital, not only to avoid failure but also to thrive.

In this survey, McKinsey explored which specific practices organizations must have in place to shape a winning strategy for digital—in essence, what the operating model looks like for a successful digital strategy of reinvention. Based on the responses, there are four areas of marked difference in how companies with the best economic performance approach digital strategy, compared with all others:

  • The best performers have increased the agility of their digital-strategy practices, which enables first-mover opportunities.

McK1

  • They have taken advantage of digital platforms to access broader ecosystems and to innovate new digital products and business models.

McK2

McK3

  • They have used M&A to build new digital capabilities and digital businesses.

McK4

  • They have invested ahead of their peers in digital talent.

McK5

Click here to access McKinsey’s survey results

Insurance Fraud Report 2019

Let’s start with some numbers. In this 2019 Insurance Fraud survey, loss ratios were 73% in the US. On average, 10% of the incurred losses were related to fraud, resulting in losses of $34 billion per year.

By actively fighting fraud we can improve these ratios and our customers’ experience. It’s time to take our anti-fraud efforts to a higher level. To effectively fight fraud, a company needs support and commitment throughout the organization, from top management to customer service. Detecting fraudulent claims is important. However, it can’t be the only priority. Insurance carriers must also focus on portfolio quality instead of quantity or volume.

It all comes down to profitable portfolio growth. Why should honest customers have to bear the risks brought in by others? In the end, our entire society suffers from fraud. We’re all paying higher premiums to cover for the dishonest. Things don’t change overnight, but an effective industry-wide fraud approach will result in healthy portfolios for insurers and fair insurance premiums for customers. You can call this honest insurance.

The Insurance Fraud Survey was conducted

  • to gain a better understanding of the current market state,
  • the challenges insurers must overcome
  • and the maturity level of the industry regarding insurance fraud.

This report is a follow-up to the Insurance Fraud & Digital Transformation Survey published in 2016. Fraudsters are constantly innovating, so it is important to continuously monitor developments. Today you are reading the latest update on insurance fraud. For some topics, the results of this survey are compared to those from the 2016 study.

This report explores global fraud trends in P&C insurance. This research addresses

  • challenges,
  • different approaches,
  • engagement,
  • priority,
  • maturity
  • and data sharing.

It provides insights for online presence, mobile apps, visual screening technology, telematics and predictive analytics.

Fraud-Fighting-Culture

Fraudsters are getting smarter in their attempts to stay under their insurer’s radar. They are often one step ahead of the fraud investigator. As a result, money flows to the wrong people. Of course, these fraudulent claims payments have a negative effect on loss ratio and insurance premiums. Therefore, regulators in many countries around the globe created anti-fraud plans and fraud awareness campaigns. Several industry associations have also issued guidelines and proposed preventive measures to help insurers and their customers.

Fraud1

Engagement between Departments

Fraud affects the entire industry, and fighting it pays off. US insurers say that fraud has climbed by more than 60% over the last three years. Meanwhile, the total savings from proven fraud cases exceeded $116 million. Insurers are seeing an increase in fraudulent cases and believe awareness and cooperation between departments are key to stopping this costly problem.

Fraud2

Weapons to Fight Fraud

Companies like Google, Spotify and Uber all deliver personalized products or services. Data is the engine of it all. The more you know, the better you can serve your customers. This also holds true for the insurance industry. Knowing your customer is very important, and with lots of data, insurers now know them even better. You’d think in today’s fast digital age, fighting fraud would be an automated task.

That’s not the case. Many companies still rely on their staff instead of automated fraud solutions. 67% of the survey respondents state that their company fights fraud based on the gut feeling of their claim adjusters. There is little or no change when compared to 2016.

Fraud3

Data, Data, Data …

In the fight against fraud, insurance carriers face numerous challenges – many related to data. Compared to the 2016 survey results, there have been minor, yet important developments. Regulations around privacy and security have become stricter and clearer.

The General Data Protection Regulation (GDPR) is only one example of centralized rules being pushed from a governmental level. Laws like this improve clarity on what data can be used, how it may be leveraged, and for what purposes.

Identifying risks or detecting fraud is difficult when the quality of internal data is subpar. Subpar data is also a growing pain when trying to enhance the customer experience: to improve it, internal data needs to be accurate.

Fraud4

Benefits of Using Fraud Detection Software

Fighting fraud can be a time-consuming and error-prone process, especially when done manually. This approach is often based on the knowledge of claims adjustors. But what if that knowledge leaves the company? The influence of bias or prejudice when investigating fraud also comes into play.

With well-organized and automated risk analysis and fraud detection, the chances of fraudsters slipping into the portfolio are diminished significantly, a belief shared by 42% of insurers. Applications can also be processed faster. Straight-through processing or touchless claims handling improves customer experience, and thus customer satisfaction. The survey reported that 61% of insurers currently work with fraud detection software to improve real-time fraud detection.

Fraud5

Click here to access FRISS’ detailed Report

Integrating Finance, Risk and Regulatory Reporting (FRR) through Comprehensive Data Management

Data travels faster than ever, anywhere and all the time. Yet as fast as it moves, it has barely been able to keep up with the expanding agendas of financial supervisors. You might not know it to look at them, but the authorities in Basel, Washington, London, Singapore and other financial and political centers are pretty swift themselves when it comes to devising new requirements for compiling and reporting data. They seem to want nothing less than a renaissance in the way institutions organize and manage their finance, risk and regulatory reporting activities.

The institutions themselves might want the same thing. Some of the business strategies and tactics that made good money for banks before the global financial crisis have become unsustainable and cut into their profitability. More stringent regulatory frameworks imposed since the crisis require the implementation of complex, data-intensive stress testing procedures and forecasting models that call for unceasing monitoring and updating. The days of static reports capturing a moment in a firm’s life are gone. One of the most challenging data management burdens is rooted in duplication. The evolution of regulations has left banks with various bespoke databases across five core functions:

  • credit,
  • treasury,
  • profitability analytics,
  • financial reporting
  • and regulatory reporting,

with the same data inevitably appearing and being processed in multiple places. This hodgepodge of bespoke marts leads to both the duplication of data and processes and the risk of inconsistencies – which tend to rear their heads at inopportune moments (i.e. when consistent data needs to be presented to regulators). For example,

  • credit extracts core loan, customer and credit data;
  • treasury pulls core cash flow data from all instruments;
  • profitability departments pull the same instrument data as credit and treasury and add ledger information for allocations;
  • financial reporting pulls ledgers and some subledgers for reporting;
  • and regulatory reporting pulls the same data yet again to submit reports to regulators per prescribed templates.

The ever-growing list of considerations has compelled firms to revise, continually and on the fly, not just how they manage their data but how they manage their people and basic organizational structures. An effort to integrate activities and foster transparency – in particular through greater cooperation among risk and finance – has emerged across financial services. This often has been in response to demands from regulators, but some of the more enlightened leaders in the industry see it as the most sensible way to comply with supervisory mandates and respond to commercial exigencies, as well. Their ability to do that has been constrained by the variety, frequency and sheer quantity of information sought by regulators, boards and senior executives. But that is beginning to change as a result of new technological capabilities and, at least as important, new management strategies. This is where the convergence of Finance, Risk and Regulatory Reporting (FRR) comes in. The idea behind the FRR theme is that sound regulatory compliance and sound business analytics are manifestations of the same set of processes. Satisfying the demands of supervisory authorities and maximizing profitability and competitiveness in the marketplace involve similar types of analysis, modeling and forecasting. Each is best achieved, therefore, through a comprehensive, collaborative organizational structure that places the key functions of finance, risk and regulatory reporting at its heart.

The glue that binds this entity together and enables it to function as efficiently and cost effectively as possible – financially and in the demands placed on staff – is a similarly comprehensive and unified FRR data management. The right architecture will permit data to be drawn upon from all relevant sources across an organization, including disparate legacy hardware and software accumulated over the years in silos erected for different activities and geographies. Such an approach will reconcile and integrate this data and present it in a common, consistent, transparent fashion, permitting it to be deployed in the most efficient way within each department and for every analytical and reporting need, internal and external.

The immense demands for data, and for a solution to manage it effectively, have served as a catalyst for a revolutionary development in data management: Regulatory Technology, or RegTech. The definition is somewhat flexible and tends to vary with the motivations of whoever is doing the defining, but RegTech basically is the application of cutting-edge hardware, software, design techniques and services to the idiosyncratic challenges related to financial reporting and compliance. The myriad advances that fall under the RegTech rubric, such as centralized FRR or RegTech data management and analysis, data mapping and data visualization, are helping financial institutions to get out in front of the stringent reporting requirements at last and accomplish their efforts to integrate finance, risk and regulatory reporting duties more fully, easily and creatively.

A note of caution though: While new technologies and new thinking about how to employ them will present opportunities to eliminate weaknesses that are likely to have crept into the current architecture, ferreting out those shortcomings may be tricky because some of them will be so ingrained and pervasive as to be barely recognizable. But it will have to be done to make the most of the systems intended to improve or replace existing ones.

Just what a solution should encompass to enable a firm to meet its data management objectives depends on the

  • specifics of its business, including its size and product lines,
  • the jurisdictions in which it operates,
  • its IT budget
  • and the tech it has in place already.

But it should accomplish three main goals:

  1. Improving data lineage by establishing a trail for each piece of information at any stage of processing (a minimal lineage-tracking sketch follows this list)
  2. Providing a user-friendly view of the different processing steps to foster transparency
  3. Working together seamlessly with legacy systems so that implementation takes less time and money and imposes less of a burden on employees
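
As a minimal sketch of the first goal, the hypothetical structure below attaches a lineage trail to a single data item so that every processing step can later be displayed or audited. The field, source and template names are invented, not drawn from any particular vendor solution.

# Minimal sketch of goal 1: attach a lineage trail to each data item so every
# processing step can be traced and displayed. Field names are hypothetical.
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class DataItem:
    name: str
    value: float
    lineage: list[dict] = field(default_factory=list)

    def record_step(self, step: str, source: str) -> None:
        """Append an audit entry describing where this value came from."""
        self.lineage.append({
            "step": step,
            "source": source,
            "timestamp": datetime.now(timezone.utc).isoformat(),
        })

# A loan balance flowing from the credit mart into a regulatory return
exposure = DataItem(name="loan_12345_balance", value=250_000.0)
exposure.record_step("extract", "credit_core_banking")
exposure.record_step("transform", "currency_conversion_EUR")
exposure.record_step("report", "regulatory_template_COREP")

for entry in exposure.lineage:   # user-friendly view of each processing step
    print(entry)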

The two great trends in financial supervision – the rapid rise in data management and reporting requirements, and the demands for greater organizational integration – can be attributed to a single culprit: the lingering silo structure. Fragmentation continues to be supported by such factors as a failure to integrate the systems of component businesses after a merger and the tendency of some firms to find it more sensible, even if it may be more costly and less efficient in the long run, to install new hardware and software whenever a new set of rules comes along. That makes regulators – the people pressing institutions to break down silos in the first place – inadvertently responsible for erecting new barriers.

This bunker mentality – an entrenched system of entrenchment – made it impossible to recognize the massive buildup of credit difficulties that resulted in the global crisis. It took a series of interrelated events to spark the wave of losses and insolvencies that all but brought down the financial system. Each of them might have appeared benign or perhaps ominous but containable when taken individually, and so the occupants of each silo, who could only see a limited number of the warning signs, were oblivious to the extent of the danger. More than a decade has passed since the crisis began, and many new supervisory regimens have been introduced in its aftermath. Yet bankers, regulators and lawmakers still feel the need, with justification, to press institutions to implement greater organizational integration to try to forestall the next meltdown. That shows how deeply embedded the silo system is in the industry.

Data requirements for the development that, knock on wood, will limit the damage from the next crisis – determining what will happen, rather than identifying and explaining what has already happened – are enormous. The same goes for running an institution in a more integrated way. It’s not just more data that’s needed, but more kinds of data and more reliable data. A holistic, coordinated organizational structure, moreover, demands that data be analyzed at a higher level to reconcile the massive quantities and types of information produced within each department. And institutions must do more than compile and sort through all that data. They have to report it to authorities – often quarterly or monthly, sometimes daily and always when something is flagged that could become a problem. Indeed, some data needs to be reported in real time. That is a nearly impossible task for a firm still dominated by silos and highlights the need for genuinely new design and implementation methods that facilitate the seamless integration of finance, risk and regulatory reporting functions. Among the more data-intensive regulatory frameworks introduced or enhanced in recent years are:

  • IFRS 9 Financial Instruments and Current Expected Credit Loss. The respective protocols of the International Accounting Standards Board and Financial Accounting Standards Board may provide the best examples of the forward-thinking approach – and rigorous reporting, data management and compliance procedures – being demanded. The standards call for firms to forecast credit impairments to assets on their books in near real time. The incurred-loss model being replaced merely had banks present bad news after the fact. The number of variables required to make useful forecasts, plus the need for perpetually running estimates that hardly allow a chance to take a breath, make the standards some of the most data-heavy exercises of all (a simplified expected-credit-loss calculation is sketched after this list).
  • Stress tests here, there and everywhere. Whether for the Federal Reserve’s Comprehensive Capital Analysis and Review (CCAR) for banks operating in the United States, the Firm Data Submission Framework (FDSF) in Britain or Asset Quality Reviews, the version conducted by the European Banking Authority (EBA) for institutions in the euro zone, stress testing has become more frequent and more free-form, too, with firms encouraged to create stress scenarios they believe fit their risk profiles and the characteristics of their markets. Indeed, the EBA is implementing a policy calling on banks to conduct stress tests as an ongoing risk management procedure and not merely an assessment of conditions at certain discrete moments.
  • Dodd-Frank Wall Street Reform and Consumer Protection Act. The American law expands stress testing to smaller institutions that escape the CCAR. The act also features extensive compliance and reporting procedures for swaps and other over-the-counter derivative contracts.
  • European Market Infrastructure Regulation. Although less broad in scope than Dodd-Frank, EMIR has similar reporting requirements for European institutions regarding OTC derivatives.
  • AnaCredit, Becris and FR Y-14. The European Central Bank project, known formally as the Analytical Credit Dataset, and its Federal Reserve equivalent for American banks, respectively, introduce a step change in the amount and granularity of data that needs to be reported. Information on loans and counterparties must be reported contract by contract under AnaCredit, for example. Adding to the complication and the data demands, the European framework permits national variations, including some with particularly rigorous requirements, such as the Belgian Extended Credit Risk Information System (Becris).
  • MAS 610. The core set of returns that banks file to the Monetary Authority of Singapore are being revised to require information at a far more granular level beginning next year. The number of data elements that firms have to report will rise from about 4,000 to about 300,000.
  • Economic and Financial Statistics Review (EFS). The Australian Prudential Regulation Authority’s EFS Review constitutes a wide-ranging update to the regulator’s statistical data collection demands. The sweeping changes include requests for more granular data and new forms in what would be a three-phase implementation spanning two years, requiring parallel and trial periods running through 2019 and beyond.
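
To illustrate why frameworks such as IFRS 9 are so data-hungry, the sketch below shows a heavily simplified expected-credit-loss calculation (probability of default × loss given default × exposure at default, discounted). The inputs are invented; the actual standards require forward-looking, probability-weighted scenarios across entire portfolios, recalculated continually.

# Simplified, illustrative IFRS 9-style expected credit loss (ECL):
# ECL = sum over future periods of PD x LGD x EAD, discounted.
# The figures below are made up; the actual standard requires forward-looking,
# probability-weighted scenarios and far more granular inputs.

def expected_credit_loss(pd_by_year, lgd, ead, discount_rate):
    """ECL as the discounted sum of yearly expected losses."""
    ecl = 0.0
    for year, pd_marginal in enumerate(pd_by_year, start=1):
        yearly_loss = pd_marginal * lgd * ead
        ecl += yearly_loss / (1 + discount_rate) ** year
    return ecl

# Hypothetical 3-year loan: marginal default probabilities per year
pd_by_year = [0.02, 0.03, 0.04]
lifetime_ecl = expected_credit_loss(pd_by_year, lgd=0.45, ead=1_000_000, discount_rate=0.05)
twelve_month_ecl = expected_credit_loss(pd_by_year[:1], lgd=0.45, ead=1_000_000, discount_rate=0.05)

print(f"12-month ECL: {twelve_month_ecl:,.0f}  Lifetime ECL: {lifetime_ecl:,.0f}")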

All of those authorities, all over the world, requiring that much more information present a daunting challenge, but they aren’t the only ones demanding that finance, risk and regulatory reporting staffs raise their games. Boards, senior executives and the real bosses – shareholders – have more stringent requirements of their own for profitability, capital efficiency, safety and competitiveness. Firms need to develop more effective data management and analysis in this cause, too.

The critical role of data management was emphasized and codified in Document 239 of the Basel Committee on Banking Supervision (BCBS), “Principles for Effective Risk Data Aggregation and Risk Reporting.” PERDARR, as it has come to be called in the industry, assigns data management a central position in the global supervisory architecture, and the influence of the 2013 paper can be seen in mandates far and wide. BCBS 239 explicitly linked a bank’s ability to gauge and manage risk with its ability to function as an integrated, cooperative unit rather than a collection of semiautonomous fiefdoms. The process of managing and reporting data, the document makes clear, enforces the link and binds holistic risk assessment to holistic operating practices. The Basel committee’s chief aim was to make sure that institutions got the big picture of their risk profile so as to reveal unhealthy concentrations of exposure that might be obscured by focusing on risk segment by segment. Just in case that idea might escape some executive’s notice, the document mentions the word “aggregate,” in one form or another, 86 times in the 89 ideas, observations, rules and principles it sets forth.

The importance of aggregating risks, and having data management and reporting capabilities that allow firms to do it, is spelled out in the first of these: ‘One of the most significant lessons learned from the global financial crisis that began in 2007 was that banks’ information technology (IT) and data architectures were inadequate to support the broad management of financial risks. Many banks lacked the ability to aggregate risk exposures and identify concentrations quickly and accurately at the bank group level, across business lines and between legal entities. Some banks were unable to manage their risks properly because of weak risk data aggregation capabilities and risk reporting practices. This had severe consequences to the banks themselves and to the stability of the financial system as a whole.’

If risk data management was an idea whose time had come when BCBS 239 was published five years ago, then RegTech should have been the means to implement the idea. RegTech was being touted even then, or soon after, as a set of solutions that would allow banks to increase the quantity and quality of the data they generate, in part because RegTech itself was quantitatively and qualitatively ahead of the hardware and software with which the industry had been making do. There was just one ironic problem: Many of the RegTech solutions on the market at the time were highly specialized and localized products and services from small providers. That encouraged financial institutions to approach data management deficiencies gap by gap, project by project, perpetuating the compartmentalized, siloed thinking that was the scourge of regulators and banks alike after the global crisis. The one-problem-at-a-time approach also displayed to full effect another deficiency of silos: a tendency for work to be duplicated, with several departments each producing the same information, often in different ways and with different results. That is expensive and time consuming, of course, and the inconsistencies that are likely to crop up make the data untrustworthy for regulators and for executives within the firm that are counting on it.

Probably the most critical feature of a well thought-out solution is a dedicated, focused and central FRR data warehouse that can chisel away at the barriers between functions, even at institutions that have been slow to abandon a siloed organizational structure reinforced with legacy systems.

FRR

With (see the sketch after this legend):

  • E: Extract
  • L: Load
  • T: Transform Structures
  • C: Calculations
  • A: Aggregation
  • P: Presentation
 

Click here to access Wolters Kluwer’s White Paper

 

 

Perspectives on the next wave of cyber

Financial institutions are acutely aware that cyber risk is one of the most significant perils they face and one of the most challenging to manage. The perceived intensity of the threats, and Board level concern about the effectiveness of defensive measures, ramp up continually as bad actors increase the sophistication, number, and frequency of their attacks.

Cyber risk management is high on or at the top of the agenda for financial institutions across the sector globally. Highly visible attacks of increasing insidiousness and sophistication are headline news on an almost daily basis. The line between criminal and political bad actors is increasingly blurred with each faction learning from the other. In addition, with cyberattack tools and techniques becoming more available via the dark web and other sources, the population of attackers continues to increase, with recent estimates putting the number of cyberattackers globally in the hundreds of thousands.

Cyber offenses against banks, clearers, insurers, and other major financial services sector participants will not abate any time soon. Judging by the velocity and frequency of attacks, the motivation to attack financial services institutions can be several hundred times higher than for non-financial services organizations.

Observing these developments, regulators are prescribing increasingly stringent requirements for cyber risk management. New and emerging regulation will force changes on many fronts and will compel firms to demonstrate that they are taking cyber seriously in all that they do. However, compliance with these regulations will only be one step towards assuring effective governance and control of institutions’ Cyber Risk.

We explore the underlying challenges with regard to cyber risk management and analyze the nature of increasingly stringent regulatory demands. Putting these pieces together, we frame five strategic moves which we believe will enable businesses to satisfy business needs, their fiduciary responsibilities with regard to cyber risk, and regulatory requirements:

  1. Seek to quantify cyber risk in terms of capital and earnings at risk.
  2. Anchor all cyber risk governance through risk appetite.
  3. Ensure effectiveness of independent cyber risk oversight using specialized skills.
  4. Comprehensively map and test controls, especially for third-party interactions.
  5. Develop and exercise major incident management playbooks.

These points are consistent with global trends for cyber risk management. Further, we believe that our observations on industry challenges and the steps we recommend to address them are applicable across geographies, especially when considering prioritization of cyber risk investments.

FIVE STRATEGIC MOVES

The current environment poses major challenges for Boards and management. Leadership has to fully understand the cyber risk profile the organization faces to simultaneously protect the institution against ever-changing threats and be on the front foot with regard to increasing regulatory pressures, while prioritizing the deployment of scarce resources. This is especially important given that regulation is still maturing and it is not yet clear how high the compliance bars will be set and what resources will need to be committed to achieve passing grades.

With this in mind, we propose five strategic moves which we believe, based on our experience, will help institutions position themselves well to address existing cyber risk management challenges.

1) Seek to quantify cyber risk in terms of capital and earnings at risk

Boards of Directors and all levels of management intuitively relate to risks that are quantified in economic terms. Explaining any type of risk, opportunity, or tradeoff relative to the bottom line brings sharper focus to the debate.

For all financial and many non-financial risks, institutions have developed methods for quantifying expected and unexpected losses in dollar terms that can readily be compared to earnings and capital. Further, regulators have expected this as a component of regulatory and economic capital, CCAR, and/or resolution and recovery planning. Predicting losses due to cyber events is particularly difficult because such losses consist of a combination of direct, indirect, and reputational elements which are not easy to quantify. In addition, there is limited historical cyber loss exposure data available to support robust cyber risk quantification.

Nevertheless, institutions still need to develop a view of their financial exposure to cyber risk at different levels of confidence and understand how this varies by business line, process, or platform. In some cases, these views may be more expert based, using scenario analysis approaches as opposed to raw statistical modeling outputs. The objectives are still the same – to challenge perspectives as to

  • how much risk exposure exists,
  • how it could manifest within the organization,
  • and how specific response strategies are reducing the institution’s inherent cyber risk.
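
To make these objectives concrete, the following is a minimal, illustrative sketch of a scenario-based frequency/severity simulation in Python. The scenario names, frequencies, and loss parameters are hypothetical placeholders for expert-calibrated inputs rather than benchmarks, and the approach is only one simplified way of expressing cyber exposure as an expected loss and a tail, capital-style figure.

    # Illustrative scenario-based cyber loss simulation; all figures are hypothetical.
    import numpy as np

    # Expert-style scenario parameters: (annual event frequency, median loss in USD,
    # lognormal dispersion). The values below are placeholders, not benchmarks.
    SCENARIOS = {
        "ransomware_outage":       (0.30,  5_000_000, 1.2),
        "third_party_data_breach": (0.15, 12_000_000, 1.0),
        "payment_fraud_campaign":  (0.50,  1_000_000, 0.8),
    }

    def simulate_annual_losses(n_years=100_000, seed=7):
        """Monte Carlo simulation of aggregate annual cyber losses across scenarios."""
        rng = np.random.default_rng(seed)
        totals = np.zeros(n_years)
        for freq, median_loss, sigma in SCENARIOS.values():
            counts = rng.poisson(freq, size=n_years)  # simulated event counts per year
            for year, n_events in enumerate(counts):
                if n_events:
                    totals[year] += rng.lognormal(np.log(median_loss), sigma, n_events).sum()
        return totals

    losses = simulate_annual_losses()
    print(f"Expected annual loss       : {losses.mean():,.0f}")
    print(f"99th percentile annual loss: {np.quantile(losses, 0.99):,.0f}")

Outputs of this kind can then be set against earnings and capital to frame the affordability discussion picked up under risk appetite below.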

2) Anchor all cyber risk governance through risk appetite

Regulators are specifically insisting on the establishment of a cyber risk strategy, which is typically shaped by a cyber risk appetite. This should represent an effective governance anchor to help address the Board’s concerns about whether appropriate risks are being considered and managed effectively.

Setting a risk appetite enables the Board and senior management to more deeply understand exposure to specific cyber risks, establish clarity on the cyber imperatives for the organization, work out tradeoffs, and determine priorities.

Considering cyber risk in this way also enables it to be brought into a common framework with all other risks and provides a starting point to discuss whether the exposure is affordable (given capital and earnings) and strategically acceptable.

Cyber risk appetite should be cascaded down through the organization and provide a coherent management and monitoring framework consisting of

  • metrics,
  • assessments,
  • and practical tests or exercises

at multiple levels of granularity. Such cascading establishes a relatable chain of information at each management level, across business lines and functions. Each management layer can hold the next layer more specifically accountable, and parallel business units and operations can apply common standards for comparing results and sharing best practices.

Finally, the Second and Third Lines of Defense gain focal points against which to review and assure compliance. A risk appetite chain also provides a means for attesting to the effectiveness of controls and to adherence to governance directives and standards.

Where it can be demonstrated that risk appetite is being upheld down to the procedural level, management will be more confident in providing the attestations that regulators require. A simplified illustration of such an appetite cascade follows.
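
As an illustration only, a cascaded appetite can be represented as a simple hierarchy in which each level carries its own metrics and Board-approved thresholds. The sketch below is a minimal Python representation under that assumption; the level names, metric names, and limits are hypothetical.

    # Illustrative cascade of cyber risk appetite metrics; names and thresholds are hypothetical.
    from dataclasses import dataclass, field

    @dataclass
    class AppetiteMetric:
        name: str
        threshold: float   # approved limit at this level of the cascade
        current: float     # latest observed value

        def breached(self) -> bool:
            return self.current > self.threshold

    @dataclass
    class AppetiteNode:
        level: str                                             # e.g. enterprise, business line, platform
        metrics: list[AppetiteMetric] = field(default_factory=list)
        children: list["AppetiteNode"] = field(default_factory=list)

        def breaches(self):
            """Walk the cascade and collect every breached metric with the level it sits at."""
            found = [(self.level, m.name) for m in self.metrics if m.breached()]
            for child in self.children:
                found.extend(child.breaches())
            return found

    enterprise = AppetiteNode("Enterprise", [AppetiteMetric("unpatched_critical_vulns_pct", 2.0, 1.4)])
    retail = AppetiteNode("Retail Banking", [AppetiteMetric("phishing_test_failure_pct", 5.0, 6.3)])
    enterprise.children.append(retail)
    print(enterprise.breaches())   # -> [('Retail Banking', 'phishing_test_failure_pct')]

A structure of this kind gives each management layer a concrete set of limits to monitor and attest against, which underpins the accountability chain described above.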


3) Ensure effectiveness of independent cyber risk oversight using specialized skills

From our perspective, firms face challenges when attempting to practically fit cyber risk management into a “Three Lines of Defense” model and align cyber risk holistically within an enterprise risk management framework.

CROs and risk management functions have traditionally developed specialized skills for many risk types, but often have not developed the same depth on IT and cyber risks. Some organizations have sought to overcome this challenge by weaving risk management into the IT organization as a First Line function.

In order to more clearly segregate the roles of IT, the business, and Information Security (IS), the Chief Information Security Officer (CISO) and the IS team will typically need to be positioned as a “1.5 Line of Defense”. This allows the Information Security group to provide more formal oversight and guidance on cyber requirements and to monitor day-to-day compliance across business and technology teams.

Further independent risk oversight and audit are clearly needed as part of the Third Line of Defense. Defining what oversight and audit mean becomes more traceable and tractable when specific governance mandates and metrics are established from the Board down.

Institutions will also need to deal with the practical challenge of building and retaining cyber talent that understands the business imperatives, compliance requirements, and associated cyber risk exposures.

At the leadership level, some organizations have introduced the concept of a Risk Technology Officer who interfaces with the CISO and is responsible for integration of cyber risk with operational risk.

4) Comprehensively map and test controls, especially for third-party interactions

Institutions need to undertake more rigorous and more frequent assessments of cyber risks across operations, technology, and people. These assessments need to test

  • the efficacy of surveillance,
  • the effectiveness of protection and defensive controls,
  • the responsiveness of the organization,
  • and the ability to recover

in a manner consistent with expectations of the Board.

Given the new and emerging regulatory requirements, firms will need to pay closer attention to the ongoing assessment and management of third parties. Third parties need to be tiered based on their access to, and interaction with, the institution’s high-value assets. Through this assessment process, institutions need to develop a more practical understanding of their ability to obtain early warning signals of cyber threats. In a number of cases, a firm may choose to outsource more IT or data services to third-party providers (e.g., cloud) where it considers this option more attractive relative to the cost or talent demands of maintaining Information Security in-house for certain capabilities. At the same time, the risk of third-party compromise needs to be fully understood with respect to the overall risk appetite. A simplified tiering sketch follows below.
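
By way of illustration, the sketch below shows one simple way such tiering logic might be expressed in code; the criteria, tier labels, and oversight activities are hypothetical assumptions rather than a prescribed methodology.

    # Illustrative third-party tiering; criteria and tier actions are hypothetical.
    def tier_third_party(accesses_high_value_assets: bool,
                         network_connectivity: str,       # "none", "batch", or "persistent"
                         data_sensitivity: str) -> str:   # "public", "internal", or "confidential"
        """Assign an oversight tier that drives assessment depth and frequency."""
        if accesses_high_value_assets or (
                network_connectivity == "persistent" and data_sensitivity == "confidential"):
            return "Tier 1 - onsite assessment and continuous monitoring"
        if network_connectivity != "none" or data_sensitivity != "public":
            return "Tier 2 - annual questionnaire plus evidence review"
        return "Tier 3 - contractual attestations only"

    print(tier_third_party(False, "persistent", "confidential"))   # -> Tier 1 ...

In practice the inputs would come from a third-party inventory, and the resulting tiers would feed assessment schedules and early-warning monitoring.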

5) Develop and exercise incident management playbooks

A critical test of an institution’s cyber risk readiness is its ability to quickly and effectively respond when a cyberattack occurs.

As part of raising the bar on cyber resilience, institutions need to ensure that they have clearly documented and proven cyber incident response plans that include

  • a comprehensive array of attack scenarios,
  • clear identification of accountabilities across the organization,
  • response strategies,
  • and associated internal and external communication scenarios.

Institutions need to test their incident response plans thoroughly and on an ongoing basis via tabletop exercises and practical drills. In a tabletop exercise, key stakeholders walk through specific attack scenarios to test their knowledge of response strategies. This exposes key stakeholders to the more tangible aspects of cyber risk and to their respective roles in the event of a cyberattack, and it can reveal gaps in specific response processes, roles, and communications that the institution will need to address. A minimal illustration of a playbook structure and gap check follows below.
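
As a minimal sketch, a playbook can also be captured as structured data so that tabletop exercises can systematically surface gaps of the kind described above. The scenario names, roles, and communication paths below are hypothetical.

    # Illustrative incident-response playbook structure; scenarios and roles are hypothetical.
    PLAYBOOK = {
        "ransomware_on_core_banking": {
            "accountable": "CISO",
            "response_steps": ["isolate affected segment", "activate backups", "engage forensics"],
            "internal_comms": "crisis bridge within 30 minutes",
            "external_comms": "regulator notification per local breach-reporting rules",
        },
        "third_party_data_leak": {
            "accountable": "Head of Third-Party Risk",
            "response_steps": ["suspend data feed", "assess exposure", "notify affected customers"],
            "internal_comms": None,   # deliberate gap: no internal escalation path documented
            "external_comms": "press holding statement",
        },
    }

    def tabletop_gaps(playbook):
        """Flag scenarios missing an owner, response steps, or a communication path."""
        required = ("accountable", "response_steps", "internal_comms", "external_comms")
        return {name: [k for k in required if not entry.get(k)]
                for name, entry in playbook.items()
                if any(not entry.get(k) for k in required)}

    print(tabletop_gaps(PLAYBOOK))   # -> {'third_party_data_leak': ['internal_comms']}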

Last but not least, incident management plans need to be reviewed and refined in line with changes in the overall threat landscape and the institution’s cyber threat profile, at least yearly and more frequently where the nature and volatility of the risk for a given business line or platform demands it.

CONCLUSION

Cyber adversaries are increasingly sophisticated, innovative, organized, and relentless in developing new and nefarious ways to attack institutions. Cyber risk represents a relatively new class of risk that requires institutions to grasp often complex technological aspects, social engineering factors, and the changing nature of operational risk that cyber threats bring.

Leadership has to understand the threat landscape and be fully prepared to address the associated challenges. It would be impractical to have zero tolerance for cyber risk, so institutions will need to determine their risk appetite with regard to cyber and, consequently, make the corresponding governance, investment, and operational design decisions.

The new and emerging regulations are a clear directive to financial institutions to keep cyber risk at the center of their enterprise-wide business strategy, raising the overall bar for cyber resilience. The associated directives and requirements from the many regulatory bodies provide a strong basis for cyber risk management practices, but each institution will need to ensure that it is tackling cyber risk in a manner fully aligned with its own risk management strategy and principles. In this context, we believe the five moves represent strategically important advances that almost all financial services firms will need to make to meet business security, resiliency, and regulatory requirements.

For further reading, see MMC’s Cyber Handbook.