News

EIOPA’s Insurance Stress Test 2018 Recommendations

Introduction

During the course of 2018, EIOPA carried out an EU-wide stress test (ST) in accordance with Articles 21(2)(b) and 32 of Regulation (EU) 1094/2010 of 24 November 2010 of the European Parliament and of the Council (hereafter the ‘Regulation’).

The Recommendations contained in this document are issued in accordance with Article 21(2)(b) of the Regulation in order to address issues identified in the stress test.

EIOPA will support National Competent Authorities (NCAs) and undertakings through guidance and other measures if needed.

The 2018 Stress Test results showed that, on aggregate, the insurance sector is sufficiently capitalised to absorb the combination of shocks prescribed in the three scenarios. However, the exercise also confirmed the European insurance sector’s significant sensitivity to market shocks, with Groups being vulnerable

  • not only to low yields and longevity risk,
  • but also to a sudden and abrupt reversal of risk premia, combined with an instantaneous shock to lapse rates and claims inflation.

The exercise further reveals potential transmission channels of the tested shocks to insurers’ balance sheets. For instance, in the Yield Curve Up (YCU) scenario, the assumed claims inflation shock leads to a net increase in the liabilities of those Groups more exposed to non-life business. Finally, the Yield Curve Down (YCD) and YCU scenarios have a similarly negative impact on post-stress SCR ratios.

As outlined in the Executive Summary of the 2018 Insurance Stress Test Report, further analyses of the results are required by EIOPA and the NCAs to obtain a deeper understanding of the risks and vulnerabilities of the sector.

In order to follow up on the main vulnerabilities, EIOPA is issuing the present Recommendations related to the 2018 stress test exercise.

Recommendation 1
NCAs should strengthen the supervision of the Groups identified as facing greater exposure to Yield Curve Up and/or Yield Curve Down scenarios. This affects, in particular, those Groups where transitional measures have a greater impact.

Recommendation 2
NCAs should carefully review and, where necessary, challenge the capital and risk management strategies of the affected Groups. In particular:

  • NCAs should require Groups to clarify the impact of the stress test in terms of capital and risk management.
  • For the affected Groups, stress test scenarios similar to YCU and YCD should be properly considered in the risk management framework, including the own risk and solvency assessments (ORSAs).
  • NCAs should review the risk appetite framework of the affected Groups.

Recommendation 3
NCAs should evaluate the potential management actions to be implemented by the affected Groups. In particular:

  • NCAs should require Groups to indicate the range of actions based on the results of the stress testing.
  • NCAs should assess if the actions identified are realistic in such stress scenarios.
  • NCAs should consider any potential second-round effects.

Recommendation 4
NCAs should contribute further to enhancing the stress test process.

Recommendation 5
NCAs should enhance cooperation and information exchange with other relevant Authorities, such as the ECB/SSM or other national authorities, concerning the stress test results of the affected insurers which form part of a financial conglomerate.

Click here to access EIOPA’s Recommendations

Mastering Risk with “Data-Driven GRC”

Overview

The world is changing. The emerging risk landscape in almost every industry vertical has changed. Effective methodologies for managing risk have changed (whatever your perspective:

  • internal audit,
  • external audit/consulting,
  • compliance,
  • enterprise risk management,

or otherwise). Finally, technology itself has changed, and technology consumers expect to realize more value, from technology that is more approachable, at lower cost.

How are these factors driving change in organizations?

Emerging Risk Landscapes

Risk has the attention of top executives. Risk shifts quickly in an economy where “speed of change” is the true currency of business, and it emerges in entirely new forms in a world where globalization and automation are forcing shifts in the core values and initiatives of global enterprises.

Evolving Governance, Risk, and Compliance Methodologies

Across risk- and control-oriented functions spanning a variety of audit functions, fraud, compliance, quality management, enterprise risk management, financial control, and many more, global organizations are acknowledging a need to provide more risk coverage at lower cost (measured in both time and currency), which is driving re-inventions of methodology and automation.

Empowerment Through Technology

Gartner, the leading analyst firm in the enterprise IT space, is very clear that the convergence of four forces—Cloud, Mobile, Data, and Social—is driving the empowerment of individuals as they interact with each other and their information through well-designed technology.

In most organizations, there is no coordinated effort to leverage organizational changes emerging from these three factors in order to develop an integrated approach to mastering risk management. The emerging opportunity is to leverage the change that is occurring to develop new programs, not just for technology, of course, but also for the critical people, methodology, and process issues. The goal is to provide senior management with a comprehensive and dynamic view of the effectiveness of how an organization is managing risk and embracing change, set in the context of overall strategic and operational objectives.

Where are organizations heading?

“Data-Driven GRC” represents a consolidation of methodologies, both functional and technological, that dramatically enhances the opportunity to address emerging risk landscapes and, in turn, to maximize the reliability of organizational performance.

This paper examines the key opportunities to leverage change—both from a risk and an organizational performance management perspective—to build integrated, data-driven GRC processes that optimize the value of audit and risk management activities, as well as the investments in supporting tools and techniques.

Functional Stakeholders of GRC Processes and Technology

The Institute of Internal Auditors’ (IIA) “Three Lines of Defense in Effective Risk Management and Control” model specifically addresses the “who and what” of risk management and control. It distinguishes and describes three role- and responsibility-driven functions:

  • Those that own and manage risks (management – the “first line”)
  • Those that oversee risks (risk, compliance, financial controls, IT – the “second line”)
  • Those functions that provide independent assurance over risks (internal audit – the “third line”)

The overarching context of these three lines acknowledges the broader role of organizational governance and governing bodies.

Technology Solutions

Data-Driven GRC is not achievable without a technology platform that supports the steps of the methodology and integrates directly with the organization’s broader technology environment to acquire the data needed to objectively assess and drive GRC activities.

From a technology perspective, there are four main components required to enable the major steps in Data-Driven GRC methodology:

1. Integrated Risk Assessment

Integrated risk assessment technology maintains the inventory of strategic risks and the assessment of how well they are managed. As the interface of the organization’s most senior professionals into GRC processes, it must be a tool relevant to and usable by executive management. This technology sets the priorities for risk mitigation efforts, thereby driving the development of project plans crafted by each of the functions in the different lines of defense.

2. Project & Controls Management

A project and controls management system (often referred to more narrowly as audit management systems or eGRC systems) enables the establishment of project plans in each risk and control function that map against the risk mitigation efforts identified as required. Projects can then be broken down into actionable sets of tactical level risks, controls that mitigate those risks, and tests that assess those controls.

This becomes the backbone of the organization’s internal control environment and related documentation and evaluation, all setting context for what data is actually required to be tested or monitored in order to meet the organization’s strategic objectives.

3. Risk & Control Analytics

If you think of Integrated Risk Assessment as the brain of the Data-Driven GRC program and the Project & Controls Management component as the backbone, then Risk & Control Analytics are the heart and lungs.

An analytic toolset is critical to reaching out into the organizational environment and acquiring all of the inputs (data) that are required to be aggregated, filtered, and processed in order to route back to the brain for objective decision making. It is important that this toolset be specifically geared toward risk and control analytics so that the filtering and processing functionality is optimized for identifying anomalies representing individual occurrences of risk, while being able to cope with huge populations of data and illustrate trends over time.

4. Knowledge Content

Supporting all of the technology components, knowledge content comes in many forms and provides the specialized knowledge of risks, controls, tests, and data required to perform and automate the methodology across a wide range of organizational risk areas.

Knowledge content should be acquired in support of individual risk and control objectives and may include items such as:

  • Risk and control templates for addressing specific business processes, problems, or high-level risk areas
  • Integrated compliance frameworks that balance multiple compliance requirements into a single set of implemented and tested controls
  • Data extractors that access specific key corporate systems and extract data sets required for evaluation (e.g., an SAP-supported organization may need an extractor that pulls a complete set of fixed asset data from its specific version of SAP, which may be used to run all required tests of controls related to fixed assets)
  • Data analysis rule sets (or analytic scripts) that take a specific data set and evaluate which transactions in the data set violate the rules, indicating that control failures have occurred (a minimal sketch follows this list)
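
To make this concrete, here is a minimal, hypothetical sketch of such a rule set; the transaction fields, rules and data are invented for illustration and do not come from any particular eGRC product:

```python
# Minimal sketch of an analytic rule set: each rule scans a transaction
# population and flags violations as potential control failures.
from datetime import date

transactions = [
    {"id": 1, "amount": 9500.0,   "approver": "j.doe", "entered_by": "j.doe", "posted": date(2019, 1, 14)},
    {"id": 2, "amount": 120000.0, "approver": "",      "entered_by": "a.lee", "posted": date(2019, 1, 15)},
    {"id": 3, "amount": 300.0,    "approver": "m.kim", "entered_by": "a.lee", "posted": date(2019, 1, 6)},
]

# Each rule returns True when the transaction violates the control it tests.
rules = {
    "missing approval":    lambda t: not t["approver"],
    "self-approval (SoD)": lambda t: t["approver"] == t["entered_by"],
    "weekend posting":     lambda t: t["posted"].weekday() >= 5,  # Saturday/Sunday
}

for t in transactions:
    failures = [name for name, rule in rules.items() if rule(t)]
    if failures:
        print(f"Transaction {t['id']} flagged: {', '.join(failures)}")
# Transaction 1 flagged: self-approval (SoD)
# Transaction 2 flagged: missing approval
# Transaction 3 flagged: weekend posting
```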

Mapped against the fully integrated Data-Driven GRC methodology, the key technology pieces that make up an integrated risk and control technology platform look as follows:

[Figure: mapping of the four technology components to the Data-Driven GRC methodology]

When evaluating technology platforms, it is imperative that each piece of this puzzle integrates directly with the others; otherwise, manual aggregation of results will be required, which is not only laborious but also inconsistent and disorganized, and (by definition) violates the Data-Driven GRC methodology.


Click here to access ACL’s study

The evolution of GRC

Attitudes to governance, risk and compliance (GRC) activities are changing among Tier 1 financial institutions. The need to keep up with rapid regulatory change, and the pressure of larger, more publicised penalties dealt out by regulators in recent years have prompted an evolution in how risk is viewed and managed. Financial firms also face an increasingly volatile market environment that requires them to remain nimble – not just to survive, but to thrive.

As a result of these market developments, GRC is now seen not as one strand of the business but as a far more integrated activity, with many companies realigning resources around the ‘three lines of defence’ model. GRC is increasingly being treated as an enterprise-wide responsibility by organisations that are successfully navigating these challenging times for global financial markets. This shift in attitudes is also leading to a rethink of the tools used by all three lines of defence to participate in GRC activities. Some are exploring more innovative solutions to support and engage infrequent users – particularly those in the first line of defence (1LoD). The more intuitive design of such tools enables these users to take a more active role in risk-aware decision-making.

These and other innovations promise to bring greater effectiveness and efficiency to an area into which firms have channelled increasing levels of resource in recent years but where they are struggling to keep up with demand. A recent survey carried out by Risk.net and IBM found that risk and compliance professionals acknowledge the limitations of existing operational risk and regulatory compliance tools and systems in satisfying current and future GRC requirements. The survey polled 106 senior risk, compliance, audit and legal executives at financial firms, including banks (53%), insurance companies (21%) and asset management firms (12%). The results revealed that nearly one-third of respondents remain unimpressed with the effectiveness of their organisation’s ability to cope with the complexity and pace of regulatory change. Nearly half gave a similar response regarding their organisation’s efficiency in this area.

With these issues in mind, many of the firms surveyed have started to explore user-experience needs more deeply and combine the results with artificial intelligence (AI) capabilities to further develop GRC systems and processes. These capabilities are designed to enhance compliance systems and processes and make them more intuitive for all. As such, user-experience research and design has become a key consideration for organisations wanting to ensure employees across all three lines of defence can participate more fully in GRC activities. In addition, AI-powered tools can help 1LoD business users better manage risk and ensure compliance by increasing the efficiency and effectiveness of these GRC systems and processes. The survey shows that, while some organisations are already developing these types of solutions, there is still room for greater understanding of the benefits of new and innovative forms of technology throughout the global financial markets. For instance, nearly half of respondents to the survey, when asked about the benefits of AI for GRC activities, were unsure of the potential time efficiencies such tools can bring. More than one-quarter were undecided on whether AI would free up employees’ time to focus on more strategic tasks.

Many organisations are still considering how to move forward in this area, but it will be those that truly embrace user-focused tools and leverage innovative technologies such as AI and advanced analytics to increase efficiencies that can expect to reap the rewards of successfully managing regulatory change and tackling market volatility.

Current and Future Applications

The survey highlights that financial firms already recognise that these solutions can be used to more efficiently manage the regulatory change process. For example, AI-based solutions can provide smart alerts to highlight the most relevant regulatory changes – 35% of survey respondents see AI as offering the biggest potential improvements in this area.

Improving the speed and accuracy of classification and reporting of information – for example, in relation to loss events – was another area identified for its high AI potential. Nearly one-third of respondents (31%) see possibilities for improvement of current GRC processes in this area. Some financial firms have already started to reap the rewards of this type of approach. Larger firms are typically ahead of the game with such developments, often having more resources to put into research and development. Among the 13% of larger firms that have seen a decrease in GRC resources over the past year, one-third attribute the decrease to “tools and automation improvements”.

Similarly, 44% of those polled work at organisations that are already improving end-to-end time and user experience in relation to GRC processes and tools. A further 19% plan to do this in the next 12 months and, in line with this, 64% of survey respondents expect their firm’s GRC resources to increase over the next 24 months (see figure 8). While it is not clear from the survey whether these additional resources will be specifically directed towards AI, more than 80% of respondents work at organisations currently considering AI for a range of GRC activities.

The most popular use of AI among financial firms is to improve the speed and/or accuracy of classifying and reporting information, such as loss events – 19% of respondents say their organisation is currently using AI for this purpose, with 81% currently considering this type of use. Such events happen fairly infrequently, so training employees to classify and enter such information can be time-consuming, but incorrect classification can have a real impact on data quality. By using natural language processing (NLP) tools to understand and categorise loss events automatically, organisations can streamline the time and resources required to train employees to collect and manage this information.
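
As a purely illustrative sketch of this kind of NLP categorisation (not IBM's actual tooling; the categories, event descriptions and library choice are assumptions), a small supervised classifier could route loss-event descriptions as follows:

```python
# Hypothetical loss-event classifier: TF-IDF features plus Naive Bayes,
# trained on a tiny invented history of labelled event descriptions.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.naive_bayes import MultinomialNB
from sklearn.pipeline import make_pipeline

history = [
    ("Unauthorized trade booked by employee to hide losses", "internal fraud"),
    ("Payment system outage delayed settlement by two days", "system failure"),
    ("Wrong beneficiary account keyed in during manual wire entry", "process error"),
    ("Employee falsified expense reports over six months", "internal fraud"),
    ("Core banking system crashed during overnight batch", "system failure"),
    ("Duplicate invoice paid due to missing four-eyes check", "process error"),
]
texts, labels = zip(*history)

model = make_pipeline(TfidfVectorizer(), MultinomialNB())
model.fit(texts, labels)

print(model.predict(["Trading system froze and orders were not executed"])[0])
# Likely "system failure"; low-confidence cases would still need human review.
```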

According to the survey, 83% of respondents are also currently considering the use of AI tools to develop smart alerts that will highlight any new rules or updates to existing regulations, helping financial firms manage regulatory change more efficiently. Many organisations already receive an overwhelming number of alerts every day relating to new rules or changes, but some or all of these changes may not actually apply to their businesses. AI can be used to tailor these alerts to ensure compliance teams only receive the most relevant ones. Using NLP to create this mechanism can be the difference between sorting through 100 alerts in one day and receiving one smart alert that has been identified by an AI-powered solution.

Control mapping is another area to which AI can add value. When putting controls in place relating to specific obligations within a regulation, for example, compliance teams can either create a new control or, using NLP, detect whether there is already an applicable control in place that can be mapped to record the organisation’s compliance with the rule. This reduces the amount of time spent by the team reading and understanding new legislation or rule changes to determine applicability, as well as improving accuracy and reducing duplicate controls.
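
A hypothetical sketch of such control mapping (the control texts, the obligation and the similarity threshold are all invented; real solutions would use far richer NLP) might look like this:

```python
# Hypothetical control mapping: match a new regulatory obligation against an
# inventory of existing controls and reuse the closest match above a threshold.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

controls = [
    "Quarterly review of user access rights to trading systems",
    "Daily reconciliation of client money accounts",
    "Annual penetration test of internet-facing applications",
]
obligation = "Client money accounts must be reconciled daily"

vectorizer = TfidfVectorizer()
matrix = vectorizer.fit_transform(controls + [obligation])
scores = cosine_similarity(matrix[-1], matrix[:-1]).ravel()

best = scores.argmax()
if scores[best] > 0.3:  # arbitrary threshold; a real system would tune it
    print(f"Map obligation to existing control: {controls[best]}")
else:
    print("No close match: create a new control")
```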

Click here to access IBM’s White Paper

EIOPA’s Supervisory Statement on Solvency II: Application of the proportionality principle in the supervision of the Solvency Capital Requirement

EIOPA identified potential divergences in supervisory practices concerning the supervision of the Solvency Capital Requirement (SCR) calculation for immaterial sub-modules.

EIOPA agrees that, in the case of immaterial SCR sub-modules, the principle of proportionality applies to the supervisory review process, but considers it important to guarantee supervisory convergence, as divergent approaches could lead to supervisory arbitrage.

EIOPA is of the view that the consistent implementation of the proportionality principle is a key element to ensure supervisory convergence for the supervision of the SCR. For this purpose the following key areas should be considered:

Proportionate approach

Supervisory authorities may allow undertakings, when calculating the SCR at the individual undertaking level, to adopt a proportionate approach towards immaterial SCR sub-modules, provided that the undertaking concerned is able to demonstrate to the satisfaction of the supervisory authorities that:

  1. the amount of the SCR sub-module is immaterial when compared with the total basic SCR (BSCR);
  2. applying a proportionate approach is justifiable taking into account the nature and complexity of the risk;
  3. the pattern of the SCR sub-module has been stable over the last three years;
  4. such amount/pattern is consistent with the business model and the business strategy for the following years; and
  5. undertakings have in place a risk management system and processes to monitor any evolution of the risk, whether triggered by internal sources or by an external source, that could affect the materiality of a certain sub-module.

This approach should not be used when calculating SCR at group level.

An SCR sub-module should be considered immaterial for the purposes of the SCR calculation when its amount is not relevant for the decision-making process or the judgement of the undertaking itself or the supervisory authorities. Following this principle, even if materiality needs to be assessed on a case-by-case basis, EIOPA recommends that materiality be assessed considering the weight of the sub-modules in the total BSCR (a minimal check is sketched after the list below) and

  • that each sub-module subject to this approach should not represent more than 5% of the BSCR
  • or all sub-modules should not represent more than 10% of the BSCR.
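
As a purely illustrative reading of these thresholds (the amounts and sub-module names are invented, and this is not an EIOPA-prescribed algorithm), the check could be expressed as:

```python
# Minimal sketch of the suggested materiality thresholds: every sub-module
# treated proportionately stays below 5% of the BSCR individually and below
# 10% of the BSCR in aggregate. All figures are hypothetical.

def immateriality_check(submodule_scrs, proportionate, bscr,
                        individual_cap=0.05, aggregate_cap=0.10):
    """Return True if the proportionate sub-modules respect both caps."""
    each_ok = all(submodule_scrs[name] <= individual_cap * bscr
                  for name in proportionate)
    total_ok = sum(submodule_scrs[name]
                   for name in proportionate) <= aggregate_cap * bscr
    return each_ok and total_ok

# Hypothetical example: BSCR of 200, two sub-modules flagged as immaterial.
scrs = {"lapse": 6.0, "cat": 7.0, "equity": 80.0}
print(immateriality_check(scrs, {"lapse", "cat"}, bscr=200.0))
# True: 3% and 3.5% individually, 6.5% in aggregate
```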

For immaterial SCR sub-modules, supervisory authorities may allow undertakings not to perform a full recalculation of such a sub-module on a yearly basis, taking into consideration the complexity and burden that such a calculation would represent when compared to the result of the calculation.

Prudent calculation

For the sub-modules identified as immaterial, a calculation of the SCR sub-module using prudently estimated inputs and leading to prudent outcomes should be performed at the time of the decision to adopt a proportionate approach. Such a calculation should be subject to the consent of the supervisory authority.

The result of such a calculation may then be used in principle for the next three years, after which a full calculation using prudently estimated inputs is required so that the immateriality of the sub-module and the risk-based and proportionate approach are re-assessed.

During the three-year period, the key function holder of the actuarial function should express an opinion to the administrative, management or supervisory body of the undertaking on the outcome of the immaterial sub-module calculation used in the SCR.

Risk management system and ORSA

Such a system should be proportionate to the risks at stake while ensuring a proper monitoring of any evolution of the risk, either triggered by internal sources such as a change in the business model or business strategy or by an external source such as an exceptional event that could affect the materiality of a certain sub-module.

Such monitoring should include the setting of qualitative and quantitative early warning indicators (EWIs), to be defined by the undertaking and embedded in the ORSA processes.
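
For instance, a quantitative EWI could be as simple as a threshold on a sub-module's weight in the BSCR; the threshold and figures below are an invented illustration, not values from the Supervisory Statement:

```python
# Hypothetical sketch of a quantitative early warning indicator (EWI):
# escalate to a full Delegated Regulation calculation when a sub-module
# treated proportionately drifts towards the materiality cap.

EWI_THRESHOLD = 0.04  # undertaking-defined, e.g. set below the 5% cap

def ewi_breached(submodule_scr, bscr, threshold=EWI_THRESHOLD):
    """Flag a sub-module for recalculation when its BSCR share exceeds the EWI."""
    return submodule_scr / bscr > threshold

if ewi_breached(8.5, 200.0):  # 4.25% > 4%: the EWI fires
    print("EWI breached: schedule a full calculation with prudently estimated inputs")
```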

Supervisory reporting and public disclosure

Undertakings should include information on the risk management system in the ORSA Report. Undertakings should include structured information on the sub-modules for which a proportionate approach is applied in the Regular Supervisory Reporting and in the Solvency and Financial Condition Report (SFCR), under the section “E.2 Capital Management – Solvency Capital Requirement and Minimum Capital Requirement”.

Supervisory review process

The approach should be implemented in the context of an ongoing supervisory dialogue, meaning that the supervisory authority should be satisfied with and agree to the approach taken, and be kept informed of any material change. Supervisory authorities should inform undertakings of any concerns with the approach; where such concerns exist, the approach should not be implemented, or might be implemented with additional safeguards as agreed between the supervisory authority and the undertaking.

In some situations supervisory authorities may require a full calculation following the requirements of the Delegated Regulation and using inputs prudently estimated.

Example : Supervisory reporting and public disclosure

Undertakings should include information on the risk management system referred to in the previous paragraphs in the ORSA Report.

Undertakings should include structured information on the sub-modules for which a proportionate approach is applied in the Regular Supervisory Reporting, under the section “E.2 Capital Management – Solvency Capital Requirement and Minimum Capital Requirement” (RSR), including at least the following information:

  1. identification of the sub-module(s) for which a proportionate approach was applied;
  2. the amount of the SCR for such a sub-module in the last three years before the application of the proportionate approach, including the current year;
  3. the date of the last calculation performed following the requirements of the Delegated Regulation using inputs prudently estimated; and
  4. early warning indicators identified and triggers for a calculation following the requirements of the Delegated Regulation and using inputs prudently estimated.

Undertakings should also include structured information on the sub-modules for which a proportionate approach is applied in the Solvency and Financial Condition Report, under the section “E.2 Capital Management – Solvency Capital Requirement and Minimum Capital Requirement” (SFCR), including at least the identification of the sub-module(s) for which a proportionate calculation was applied.

An example of structured information to be included in the regular supervisory report in line with Article 311(6) of the Delegated Regulation is as follows:

[Table: example of structured information to be included in the regular supervisory report]

This proportionate approach should also be reflected in the quantitative reporting templates to be submitted. In this case the templates would reflect the amounts used for the last full calculation performed.

Click here to access EIOPA’s Supervisory Statement

Systemic Risk and Macroprudential Policy in Insurance (EIOPA)

In its work, EIOPA followed a step-by-step approach seeking to address the following questions in a sequential way:

  1. Does insurance create or amplify systemic risk?
  2. If yes, what are the tools already existing in the Solvency II framework, and how do they contribute to mitigating the sources of systemic risk?
  3. Are other tools needed and, if yes, which ones could be promoted?

Each paper published addresses one of the questions above. The publication of the three EIOPA papers on systemic risk and macroprudential policy in insurance has constituted an important milestone by which EIOPA has defined its policy stance and laid down its initial ideas on several relevant topics.

This work should now be turned into a specific policy proposal for additional macroprudential tools or measures where relevant and possible as part of the review of Directive 2009/138/EC (the ‘Solvency II Review’). For this purpose, and in order to gather the views of stakeholders, EIOPA is publishing this Discussion Paper on systemic risk and macroprudential policy in insurance, which focuses primarily on the third paper, i.e. on potential new tools and measures. Special attention is devoted to the four tools and measures specifically highlighted in the recent European Commission’s Call for Advice to EIOPA.

The financial crisis has shown the need to further consider the way in which systemic risk is created and/or amplified, as well as the need to have proper policies in place to address those risks. So far, most of the discussions on macroprudential policy have focused on the banking sector due to its prominent role in the recent financial crisis.

Given the relevance of the topic, EIOPA initiated the publication of a series of three papers on systemic risk and macroprudential policy in insurance with the aim of contributing to the debate and ensuring that any extension of this debate to the insurance sector reflects the specific nature of the insurance business.

Each paper in the series addresses one of these questions in turn:

  • Does insurance create or amplify systemic risk? In the first paper, entitled ‘Systemic risk and macroprudential policy in insurance’, EIOPA identified and analysed the sources of systemic risk in insurance and proposed a specific macroprudential framework for the sector.
  • If yes, what are the tools already existing in the current framework, and how do they contribute to mitigating the sources of systemic risk? In the second paper, ‘Solvency II tools with macroprudential impact’, EIOPA identified, classified and provided a preliminary assessment of the tools or measures already existing within the Solvency II framework which could mitigate any of the systemic risk sources previously identified.
  • Are other tools needed and, if yes, which ones could be promoted? The third paper carried out an initial assessment of other potential tools or measures to be included in a macroprudential framework designed for insurers. EIOPA focused on four categories of tools (capital and reserving-based tools, liquidity-based tools, exposure-based tools and pre-emptive planning). The paper focuses on whether a specific instrument should or should not be further considered. This is an important aspect in light of future work in the context of the Solvency II review.

It should be noted that the ESRB (2018) has also identified a shortlist of options for additional provisions, measures and instruments, which reaches broadly similar conclusions to EIOPA’s.

This Discussion Paper is based on the three papers previously published, which underpin its content. Interested readers are encouraged to consult them for further information or details. Relevant references are included in each of the sections.

EIOPA has included questions on all three papers. The majority of the questions, however, revolve around the third paper on additional tools or measures, which is more relevant in light of the Solvency II review.

The Discussion paper primarily focuses on the “principles” of each tool, trying to explain their rationale. As such, it does not address the operational aspects/challenges of each tool (e.g. calibration, thresholds, etc.) in a comprehensive manner. Similar to the approach followed with other legislative initiatives, the technical details could be addressed by means of technical standards, guidelines or recommendations, once the relevant legal instrument has been enacted.

Definitions

EIOPA provided all relevant definitions in EIOPA (2018a). It has to be noted, however, that there is usually no unique or universal definition for all these concepts. EIOPA’s work did not seek to fill this gap. Instead, working definitions are put forward in order to set the scene and should therefore be considered in the context of this paper only.

  • Financial stability and systemic risk are two strongly related concepts. Financial stability can be defined as a state whereby the build-up of systemic risk is prevented.
  • Systemic risk means a risk of disruption in the financial system with the potential to have serious negative consequences for the internal market and the real economy.
  • Macroprudential policy should be understood as a framework that aims at mitigating systemic risk (or the build-up thereof), thereby contributing to the ultimate objective of the stability of the financial system and, as a result, to the broader economy.
  • Macroprudential instruments are qualitative or quantitative tools or measures with system-wide impact that relevant competent authorities (i.e. authorities in charge of preserving the stability of the financial system) put in place with the aim of achieving financial stability.

In the context of this paper, these concepts (i.e. tools, instruments and measures) are used as synonyms.

The macroprudential policy approach contributes to the stability of the financial system — together with other policies (e.g. monetary and fiscal) as well as with microprudential policies. Whereas microprudential policies primarily focus on individual entities, the macroprudential approach focuses on the financial system as a whole.

It should be taken into account that, in some cases, the borders between microprudential and macroprudential policies are blurred. That means, for example, that instruments that may have been designed as microprudential instruments may also have macroprudential consequences.

There are different institutional models for the implementation of macroprudential policies across the EU, in some cases involving different parties (e.g. ministries, supervisors, etc.). This paper adopts a neutral approach by referring to the generic concept of the ‘relevant authority in charge of the macroprudential policy’, which should encompass the different institutional models existing across jurisdictions. Sometimes a simplified term such as ‘the authorities’ or ‘the competent authorities’ is used.

Systemic risk in insurance

While a common understanding of the systemic relevance of the banking sector has been reached, the issue is still debated in the case of the insurance sector. In order to contribute to this debate, EIOPA developed a conceptual approach to illustrate the dynamics in which systemic risk in insurance can be created or amplified.

Main elements of EIOPA’s conceptual approach to systemic risk

  • Triggering event: Exogenous event that has an impact on one or several insurance companies and may initiate the whole process of systemic risk creation. Examples are macroeconomic factors (e.g. rising unemployment), financial factors (e.g. yield movements) or non-financial factors (e.g. demographic changes or cyber-attacks).
  • Company risk profile: The result of the collection of activities performed by the insurance company. The activities will determine: a) the specific features of the company reflecting the strategic and operational decisions taken; and b) the risk factors that the company is exposed to, i.e. the potential vulnerabilities of the company.
  • Systemic risk drivers: Elements that may enable the generation of negative spill-overs from one or more company-specific stresses into a systemic effect, i.e. they may turn a company specific-stress into a system wide stress.
  • Transmission channels: Contagion channels that explain the process by which the sources of systemic risk may affect financial stability and/or the real economy. EIOPA distinguishes five main transmission channels: a) Exposure channel; b) Asset liquidation channel; c) Lack of supply of insurance products; d) Bank-like channel; and e) Expectations and information asymmetries.
  • Sources of systemic risk: They result from the systemic risk drivers and their transmission channels. They are direct or indirect externalities whereby insurance imposes a systemic threat on the wider system. These direct and indirect externalities lead to three potential categories of sources of systemic risk, which are not mutually exclusive: entity-based, activity-based and behaviour-based sources.

In essence and as depicted in Figure 1, the approach developed by EIOPA considers that a ‘triggering event’ initially has an impact at entity level, affecting one or more insurers through their ‘risk profile’. Potential individual or collective distresses may generate systemic implications, the relevance of which is determined by the presence of different ‘systemic risk drivers’ embedded in the insurance companies.

[Figure 1: EIOPA’s conceptual approach to systemic risk in insurance]

In EIOPA’s view, systemic events could be generated in two ways.

  1. The ‘direct’ effect, originating from the failure of a systemically relevant insurer or the collective failure of several insurers generating a cascade effect. This systemic source is defined as ‘entity-based’.
  2. The ‘indirect’ effect, in which possible externalities are enhanced by engagement in potentially systemic activities (activity-based sources) or the widespread common reactions of insurers to exogenous shocks (behaviour-based source).

Potential externalities generated via direct and indirect sources are transferred to the rest of the financial system and to the real economy via specific channels (i.e. the transmission channel) and could induce changes in the risk profile of insurers, eventually generating potential second-round effects.

The following table provides an overview of possible examples of triggering events, risk profiles, systemic risk drivers and transmission channels; it should not be considered a comprehensive list of elements.

[Table: examples of triggering events, risk profiles, systemic risk drivers and transmission channels]

Potential macroprudential tools and measures to enhance the current framework

In its third paper, EIOPA (2018c) carried out an analysis focusing on four categories of tools:

a) Capital and reserving-based tools;

b) Liquidity-based tools;

c) Exposure-based tools; and

d) Pre-emptive planning.

EIOPA also considers whether the tools should be used for enhanced reporting and monitoring or as an intervention power. Following this preliminary analysis, EIOPA concluded the following:

[Table: EIOPA’s preliminary conclusions on potential macroprudential tools and measures]

Example: Enhancement of the ORSA 

Description. In an ORSA, an insurer is required to consider all material risks that may have an impact on its ability to meet its obligations to policyholders. In doing this, a forward-looking perspective is also required. Although conceived at first as a microprudential tool, the ORSA could be enhanced to take the macroprudential perspective into account as well.

Potential contribution to mitigate systemic risk. The enhancement of ORSA could help in mitigating two of the sources of systemic risk identified.

Proposal. This measure is proposed for further consideration for enhanced reporting and monitoring purposes.

Operational aspects. A description of all relevant operational aspects is carried out in EIOPA (2018c). In essence, the idea is to supplement the microprudential approach by assigning certain roles and responsibilities to the relevant authority in charge of the macroprudential policy (see Figure below). This authority could carry out three different tasks:

  1. Aggregation of information;
  2. Analysis of the information; and
  3. Provision of certain information or parameters to supervisors to channel macroprudential concerns.

Supervisors would then request undertakings to include in their ORSAs particular macroprudential risks.

Issues for consideration: In order to make the ORSA operational from a macroprudential point of view, the following would be needed:

  • A clarification of the role of the risk management function in order to include macroprudential concerns.
  • The inclusion of a new paragraph in Article 45 of the Solvency II Directive explicitly referring to the macroprudential dimension and the need to consider the macroeconomic situation and potential sources of systemic risk as a follow-up to the assessment of whether the company complies on a continuous basis with the Solvency II regulatory capital requirements.
  • Clarification that a follow-up is expected after input from supervisors, namely from the authorities in charge of macroprudential policy. Under a risk-based approach, this might imply requesting specific information (in terms of nature, scope, format and point in time) where justified by the likelihood or impact of the materialisation of a certain source of systemic risk.

Furthermore, a certain level of harmonisation of the structure and content of the ORSA report would be needed, which would enable the identification of the relevant sections by the authorities in charge of macroprudential policies. This, however, would mean a change in the current approach followed with regard to the ORSA.

Click here to access EIOPA’s detailed Discussion Paper 2019

 

Outsourcing to the Cloud: EIOPA’s Contribution to the European Commission FinTech Action Plan

In the European financial regulatory landscape, the purchase of cloud computing services falls within the broader scope of outsourcing.

Credit institutions, investment firms, payment institutions and e-money institutions are subject to multiple level 1 and level 2 regulations that govern their use of outsourcing (e.g. MiFID II, PSD2, BRRD). There are also level 3 measures: the CEBS Guidelines on Outsourcing, which represent the current guiding framework for outsourcing activities within the European banking sector.

Additional “Recommendations on cloud outsourcing” were issued on December 20, 2017 by the European Banking Authority (EBA) and entered into force on July 1, 2018. They will be repealed by the new EBA Guidelines on outsourcing arrangements (level 3), which have absorbed the text of the Recommendations.

For the (re)insurance sector, the current Solvency II regulatory framework (level 1 and level 2) governs outsourcing under Articles 38 and 49 of the Directive and Article 274 of the Delegated Regulation. EIOPA Guidelines 60-64 on the System of Governance provide level 3, principle-based guidance.

On the basis of a survey conducted by the National Supervisory Authorities (NSAs), cloud computing is not yet extensively used by (re)insurance undertakings: it is used mainly by newcomers, within a few market niches, and by larger undertakings, mostly for non-critical functions.

Moreover, as part of their wider digital transformation strategies, many large European (re)insurers are expanding their use of the cloud.

As to the applicable regulation, cloud computing is considered outsourcing, and the current level of national guidance on cloud outsourcing for the (re)insurance sector is not homogeneous. Nonetheless, most NSAs (which are banking and (re)insurance supervisors at the same time) declare that they consider the EBA Recommendations a reference for the management of cloud outsourcing.

Under the steering of its InsurTech Task Force, EIOPA will develop its own Guidelines on Cloud Outsourcing. The intention is that the Guidelines on Cloud Outsourcing (the “guidelines”) will be drafted during the first half of 2019, then issued for consultation and finalised by the end of the year.

During the process of drafting the Guidelines, EIOPA will organize a public roundtable on the use of cloud computing by (re)insurance undertakings. During the roundtable, representatives from the (re)insurance industry, cloud service providers and the supervisory community will discuss views and approaches to cloud outsourcing in a Solvency II and post-EBA Recommendations environment.

Furthermore, in order to guarantee cross-industry harmonisation within the European financial sector, EIOPA has agreed with the other two ESAs:

  • to continue the fruitful alignment maintained so far; and
  • to start – in the second part of 2019 – a joint market monitoring activity aimed at developing policy views on how cloud outsourcing in the finance sector should be treated in the future.

This should take into account the increasing use of the cloud and the potential for large cloud service providers to be a single point of failure.

Overview of Cloud Computing

Cloud computing allows users to access on-demand, shared configurable computing resources (such as networks, servers, storage, applications and services) hosted by third parties on the internet, instead of building their own IT infrastructure.

According to the US National Institute of Standards and Technology (NIST), cloud computing is: “a model for enabling ubiquitous, convenient, on-demand network access to a shared pool of configurable computing resources (e.g., networks, servers, storage, applications, and services) that can be rapidly provisioned and released with minimal management effort or service provider interaction”.

The ISO standard of 2014 defines cloud computing as a: “paradigm for enabling network access to a scalable and elastic pool of shareable physical or virtual resources with self-service provisioning and administration on-demand”. It is composed of

  • cloud computing roles and activities,
  • cloud capabilities types and cloud service categories,
  • cloud deployment models and
  • cloud computing cross-cutting aspects.

The European Banking Authority (EBA) Recommendations of 2017 – very close to the NIST definition – define cloud services as: “Services provided using cloud computing, that is, a model for enabling ubiquitous, convenient, on-demand network access to a shared pool of configurable computing resources (e.g. networks, servers, storage, applications and services) that can be rapidly provisioned and released with minimal management effort or service provider interaction.”

Shared responsibility framework

The cloud provider and cloud customer share the control of resources in a cloud system. The cloud’s different service models affect the parties’ control over the computational resources and, thus, what can be done in a cloud system. Compared to traditional IT systems, where one organization has control over the whole stack of computing resources and the entire life cycle of the systems, cloud providers and cloud customers collaboratively

  • design,
  • build,
  • deploy, and
  • operate

cloud-based systems.

The split of control means that both parties share the responsibility for providing adequate protection to cloud-based systems. The picture below shows, as a conceptual model, the different levels of shared responsibility between the cloud provider and the cloud customer.

These responsibilities contribute to achieving a compliant and secure computing environment. It has to be noted that, regardless of the service provided by the cloud provider:

  • Ensuring that the data and its classification are handled correctly and that the solution is compliant with regulatory obligations is the responsibility of the customer (e.g. in case of data theft, the cloud customer is responsible towards the damaged parties; likewise, the customer is responsible for ensuring – e.g. with specific contractual obligations – that the provider observes certain compliance requirements, such as giving the competent authorities access and audit rights);
  • Physical security is the one responsibility that is wholly owned by cloud service providers when using cloud computing.

The remaining responsibilities and controls are shared between customers and cloud providers according to the outsourcing model. However, the responsibility (in a supervisory sense) remains with the customers. Some responsibilities require the cloud provider and customer to manage and administer them together, including auditing of their domains. For example, with identity and access management using a cloud provider’s active directory services, the configuration of services such as multi-factor authentication may be up to the customer, but ensuring their effective functionality is the responsibility of the cloud provider.

[Figure: conceptual model of shared responsibilities between the cloud provider and the cloud customer]
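
The split can also be illustrated with a small responsibility matrix; the IaaS/PaaS/SaaS breakdown and every entry below are simplified assumptions for illustration, not the content of the figure:

```python
# Hypothetical shared-responsibility matrix per cloud service model.
# Entries are illustrative simplifications, not a normative allocation.
RESPONSIBILITY = {
    "data classification & compliance": {"IaaS": "customer", "PaaS": "customer", "SaaS": "customer"},
    "identity & access management":     {"IaaS": "customer", "PaaS": "shared",   "SaaS": "shared"},
    "operating system":                 {"IaaS": "customer", "PaaS": "provider", "SaaS": "provider"},
    "physical security":                {"IaaS": "provider", "PaaS": "provider", "SaaS": "provider"},
}

def owner(control, service_model):
    """Look up who administers a control under a given service model."""
    return RESPONSIBILITY[control][service_model]

print(owner("physical security", "SaaS"))                 # provider, always
print(owner("data classification & compliance", "IaaS"))  # customer, always
```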

Summary of Key Takeaways and EIOPA’s Answer to the European Commission

The key takeaways of the analysis carried out and described within this document are the following:

  1. cloud computing is used mostly by newcomers, by a few market niches and by larger undertakings, mostly for non-critical functions. However, as part of their wider digital transformation strategies, many large European (re)insurers are expanding their use of the cloud;
  2. the current Solvency II regulatory framework (level 1 and level 2) appears sound to discipline outsourcing to the cloud through the existing outsourcing provisions (Articles 38 and 49 of the Directive and Article 274 of the Delegated Regulation);
  3. cloud computing is a fast-developing service, so for its regulation to be efficient it should be principle-based rather than attempt to regulate all (re)insurance-related aspects of it;
  4. the cloud computing services used by (re)insurance undertakings are aligned with those used by the banking sector. The risks arising from the usage of cloud computing by (re)insurance undertakings appear to be generally aligned with the risks borne by banking players, with a few minor (re)insurance specificities;
  5. both banking and (re)insurance regulations discipline cloud computing through their current outsourcing provisions. Under these, banking and (re)insurance institutions are required to classify whether the cloud services they receive are ‘critical or important’. The most common approach is to classify cloud computing on a case-by-case basis – similarly to other services – on the basis of the service / process / activity / data outsourced;
  6. the impact of cloud computing on the (re)insurance market is assessed differently among jurisdictions: due to the complexity and the highly technical nature of the subject, some jurisdictions have planned to issue (or have already issued) national guidance on cloud outsourcing directly applicable to the (re)insurance market;
  7. from the gap analysis carried out, the EBA Recommendations are more specific on the subject (e.g. the specific requirement to build a register of all cloud service providers) and, being built on shared common principles, can be applied alongside the wider Solvency II regulations on outsourcing, reflecting their level 3 status;
  8. to provide legal transparency to the market participants (i.e. regulated undertakings and service providers) and to avoid potential regulatory arbitrage, EIOPA should issue guidance on cloud outsourcing aligned with the EBA Recommendations and, where applicable, the EBA Guidelines on outsourcing arrangements with minor amendments.

Click here to access EIOPA’s detailed Contribution Paper

The strategies shaping private equity in 2019 and beyond

For the past several years, fund managers have faced virtually the same challenge: how to put record amounts of raised capital to work productively amid heavy competition for assets and soaring purchase price multiples. Top performers recognize that the only effective response is to get better—and smarter.

We’ve identified four ways leading firms are doing so.

  • A growing number of general partners (GPs) are facing down rising deal multiples by using buy-and-build strategies as a form of multiple arbitrage—essentially scaling up valuable new companies by acquiring smaller, cheaper ones.
  • The biggest firms, meanwhile, are beating corporate competitors at their own game by executing large-scale strategic mergers that create value out of synergies and combined operational strength.
  • GPs are also discovering the power of advanced analytics to shed light on both value and risks in ways never before possible.
  • And they are once again exploring adjacent investment strategies that take advantage of existing capabilities, while resisting the temptation to stray too far afield.

Each of these approaches will require an investment in new skills and capabilities for most firms. Increasingly, however, continuous improvement is what separates the top-tier firms from the rest.

Buy-and-build: Powerful strategy, hard to pull off

While buy-and-build strategies have been around as long as private equity has, they’ve never been as popular as they are right now. The reason is simple: Buy-and-build can offer a clear path to value at a time when deal multiples are at record levels and GPs are under heavy pressure to find strategies that don’t rely on traditional tailwinds like falling interest rates and stable GDP growth. Buying a strong platform company and building value rapidly through well-executed add-ons can generate impressive returns.

As the strategy becomes more and more popular, however, GPs are discovering that doing it well is not as easy as it looks. When we talk about buy-and-build, we don’t mean portfolio companies that pick up one or two acquisitions over the course of a holding period. We also aren’t referring to onetime mergers meant to build scale or scope in a single stroke. We define buy-and-build as an explicit strategy for building value by using a well-positioned platform company to make at least four sequential add-on acquisitions of smaller companies. Measuring this activity with the data available isn’t easy. But you can get a sense of its growth by looking at add-on transactions. In 2003, just 21% of all add-on deals represented at least the fourth acquisition by a single platform company. That number is closer to 30% in recent years, and in 10% of the cases, the add-on was at least the 10th sequential acquisition.

Buy-and-build strategies are showing up across a wide swath of industries (see Figure 2.2). They are also moving out of the small- to middle-market range as larger firms target larger platform companies (see Figure 2.3). They are popular because they offer a powerful antidote to soaring deal multiples. They give GPs a way to take advantage of the market’s tendency to assign big companies higher valuations than smaller ones (see Figure 2.4). A buy-and-build strategy allows a GP to justify the initial acquisition of a relatively expensive platform company by offering the opportunity to tuck in smaller add-ons that can be acquired for lower multiples later on. This multiple arbitrage brings down the firm’s average cost of acquisition, while putting capital to work and building additional asset value through scale and scope. At the same time, serial acquisitions allow GPs to build value through synergies that reduce costs or add to the top line. The objective is to assemble a powerful new business such that the whole is worth significantly more than the parts.
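
A stylised arithmetic example (all figures invented, not drawn from Bain's data) shows how the blended entry multiple falls as cheaper add-ons accumulate:

```python
# Hypothetical multiple-arbitrage arithmetic for a buy-and-build platform:
# an expensive platform blended down by four cheaper add-on acquisitions.

platform_ebitda, platform_multiple = 50.0, 12.0              # platform bought at 12x
addons = [(10.0, 7.0), (8.0, 6.5), (12.0, 7.5), (6.0, 6.0)]  # (EBITDA, multiple)

total_cost = platform_ebitda * platform_multiple + sum(e * m for e, m in addons)
total_ebitda = platform_ebitda + sum(e for e, _ in addons)

print(f"Blended entry multiple: {total_cost / total_ebitda:.1f}x")  # ~9.9x vs. 12x
# If the combined business exits at the platform's 12x, the gap between the
# blended entry multiple and the exit multiple is the arbitrage captured.
```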

Having coinvested in or advised on hundreds of buy-and-build deals over the past 20 years, we’ve learned that sponsors tend to underestimate what it takes to win. We’ve seen buy-and-build strategies offer firms a number of compelling paths to value creation, but we’ve also seen these approaches badly underperform other strategies. Every deal is different, of course, but there are patterns to success.

The most effective buy-and-build strategies share several important characteristics.

Too many attempts at creating value through buy-and-build founder on the shoals of bad planning. What looks like a slam-dunk strategy rarely is. Winning involves assessing the dynamics at work in a given sector and using those insights to weave together the right set of assets. The firms that get it right understand three things going in:

  • Deep, holistic diligence is critical. In buy-and-build, due diligence doesn’t start with the first acquisition. The most effective practitioners diligence the whole opportunity, not just the component parts. That means understanding how the strategy will create value in a given sector using a specific platform company to acquire a well-defined type of add-on. Are there enough targets in the sector, and is it stable enough to support growth? Does the platform already have the right infrastructure to make acquisitions, or will you need to build those capabilities? Who are the potential targets, and what do they add? Deep answers to questions like these are a necessary prerequisite to evaluating the real potential of a buy-and-build thesis.
  • Execution is as important as the investment. Great diligence leads to a great playbook. The best firms have a clear plan for what to buy, how to integrate it, and what roles fund management and platform company leadership will play. This starts with building a leadership team that is fit for purpose. It also means identifying bottlenecks (e.g., IT systems, integration team) and addressing them quickly. There are multiple models that can work—some rely on extensive involvement from deal teams, while others assume strong platform management will take the wheel. But given the PE time frame, the imperative is to have a clear plan up front and to accelerate acquisition activity during what inevitably feels like a very short holding period.
  • Pattern recognition counts. Being able to see what works comes with time and experience. Learning, however, relies on a conscious effort to diagnose what worked well (or didn’t) with past deals. This forensic analysis should include the choice of targets, as well as how decisions along each link of the investment value chain (either by fund management or platform company management) created or destroyed value. Outcomes improve only when leaders use insights from past deals to make better choices the next time.

At a time when soaring asset prices are dialing up the need for GPs to create value any way they can, an increasing number of firms are turning to buy-and-build strategies. The potential for value creation is there; capturing it requires

  • sophisticated due diligence,
  • a clear playbook,
  • and strong, experienced leadership.

Merger integration: Stepping up to the challenge

PE funds are increasingly turning to large-scale M&A to solve what has become one of the industry’s most intractable problems—record amounts of money to spend and too few targets. GPs have put more money to work over the past five years than during any five-year period in the buyout industry’s history. Still, dry powder, or uncalled capital, has soared 64% over the same period, setting new records annually and ramping up pressure on PE firms to accelerate the pace of dealmaking.

One reason for the imbalance is hardly a bad problem: Since 2014, enthusiastic investors have flooded buyout funds with more than $1 trillion in fresh capital. Another issue, however, poses a significant conundrum: PE firms are too often having to withdraw from auctions amid fierce competition from strategic corporate buyers, many of which have a decided advantage in bidding. Given that large and mega-buyout funds of $1.5 billion or more hold two-thirds of the uncalled capital, chipping away at the mountain of dry powder will require more and bigger deals by the industry's largest players (see Figure 2.6). Very large public-to-private transactions are on the rise for precisely this reason.

But increasingly, large funds are looking to win M&A deals by recreating the economics that corporate buyers enjoy. This involves using a platform company to hunt for large-scale merger partners that add strategic value through scale, scope or both.

Making it all work, of course, is another matter. Large-scale, strategic M&A solves one problem for large PE firms by putting a lot of capital to work at once, but it also creates a major challenge: capturing value by integrating two or more complex organizations into a bigger one that makes strategic and operational sense. Bain research shows that, while there is clear value in making acquisitions large enough to have material impact on the acquirer, the success rate is uneven and correlates closely to buyer experience (see Figure 2.7). The winners do this sort of deal relatively frequently and turn large-scale M&A into a repeatable model. The laggards make infrequent big bets, often in an attempt to swing for the fences strategically.

Broken deals tend to fail because firms stumble over merger integration. They enter the deal without an integration thesis or try to do everything at once. They don't identify synergies with any precision, or fail to capture the ones they have identified. GPs neglect to sort out leadership issues soon enough, or they underestimate the challenge of merging systems and processes. For many firms, large-scale merger integration presents a steep learning curve.

In our experience, success in a PE context requires a different way of approaching three key phases of the value-creation cycle:

  • due diligence,
  • the post-announcement period,
  • and the post-close integration period (see Figure 2.8).

In many ways, what happens before the deal closes is almost as important as what happens after a firm assumes ownership. Top firms invest in deep thinking about integration from the outset of due diligence. And they bring a sharp focus to how the firm can move quickly and decisively during the holding period to maximize time to value.

In a standalone due diligence process, deal teams focus on a target’s market potential, its competitiveness, and opportunities to cut costs or improve performance. In a merger situation, those things still matter, but since the firm’s portfolio company should have a good understanding of the market already, the diligence imperative switches to a bottom-up assessment of the potential synergies:

  • Measuring synergies. Synergies typically represent most of a merger deal's value, so precision in underwriting them is critical. High-level benchmarks aren't sufficient; strong diligence demands rigorous quantification. The firm has to decide which synergies are most important, how much value they represent and how likely they are to be captured within the deal's time frame. A full understanding of the synergies available in a deal like this allows a firm to bid as aggressively as possible. It often gives the deal team the option to share the value of synergies with the seller in the form of a higher acquisition price. On the other hand, the team also needs to account for dis-synergies—the kinds of negative outcomes that can easily lead to value destruction. (A simplified quantification sketch follows this list.)
  • Tapping the balance sheet. One area of potential synergies often underappreciated by corporate buyers is the balance sheet. Because companies in the same industry frequently share suppliers and customers, combining them presents opportunities to negotiate better contracts and improve working capital. There might also be a chance to reduce inventory costs by pooling inventory, consolidating warehouses or rationalizing distribution centers. At many target companies, these opportunities represent low-hanging fruit, especially at corporate spin-offs, since parent companies rarely manage the working capital of individual units aggressively. Combined businesses can also trim capital expenditures.
  • Managing the "soft" stuff. While these balance sheet issues play to a GP's strong suit, people and culture issues usually don't. PE firms aren't known for their skill in diagnosing culture conflicts, retaining talent or working through the inevitable HR crises raised by integration. Firms often view these so-called soft issues as secondary to the things they can really measure. Yet people problems can quickly undermine synergies and other sources of value, not to mention overall performance of the combined company. To avoid these problems, it helps to focus on two things in due diligence. First, which of the target company's core capabilities need to be preserved, and what will it take to retain the top 10 people who deliver them? Second, does the existing leadership team—on either side of the transaction—understand how to integrate a business? The firm needs to know whether those responsible for leading the integration have done it before, whether they've been successful and whether the firm can trust them to do it successfully in this situation. PE owners are often more involved in integration than the board of a typical corporation. It's important not to overstep, however. Bigfooting the management team is a sure way to spur a talent exodus.

For PE firms eager to put money to work, great diligence in a merger context is critical. It should not only answer questions such as "How much value can we underwrite?" but also evaluate whether to do the deal at all. Deal teams have to resist the urge to make an acquisition simply because the clock is ticking. Corporate buyers often take years to identify and court the right target. While it's true that PE firms rarely have that luxury, no amount of merger integration prowess can make up for acquiring a company that just doesn't fit.
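
Here is the simplified quantification sketch promised in the measuring-synergies bullet above. Every figure is an assumption invented for illustration; the point is only that run-rate synergies, phase-in, one-time costs and dis-synergies can be rolled into a single discounted number the deal team can set against the acquisition premium:

```python
# Hypothetical synergy underwriting sketch: discount phased-in run-rate
# synergies, net of one-time integration costs and dis-synergies, over
# the expected holding period. Every figure is an illustrative assumption.

run_rate_synergies = 30.0              # $30M annual synergies at full run rate
phase_in = [0.3, 0.7, 1.0, 1.0, 1.0]   # share of run rate captured, years 1-5
one_time_costs = 25.0                  # integration spend, assumed in year 1
dis_synergies = 5.0                    # assumed annual attrition from overlap
discount_rate = 0.10

npv = -one_time_costs / (1 + discount_rate)   # one-time costs hit in year 1
for year, share in enumerate(phase_in, start=1):
    net_flow = run_rate_synergies * share - dis_synergies
    npv += net_flow / (1 + discount_rate) ** year

print(f"NPV of net synergies over the holding period: ${npv:.1f}M")
# The deal team can weigh this figure against the premium it would
# concede to the seller to win the auction.
```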

Once the hard work of underwriting value and generating a robust integration thesis is complete, integration planning begins in earnest. A successful integration has three major objectives:

  • capturing the identified value,
  • managing the people issues,
  • and integrating processes and systems (see Figure 2.9).

This is where the Integration Management Office (IMO) needs to shine. As the central leadership office, its role is to keep the integration effort on track and to hit the ground running on day one. Pre- and post-close, the IMO

  • monitors risks (including interdependencies),
  • tracks and reports on team progress,
  • resolves conflicts,
  • and works to achieve a consistent drumbeat of decisions and outcomes.

It manages dozens of integration teams, each with its own detailed work plan, key performance indicators and milestones. It also communicates effectively to all stakeholders.

  • Capturing value. An often-underappreciated aspect of the early merger integration process is the art of maintaining continuity in the base business. Knitting together the two organizations and realizing synergies is essential, but value can be lost quickly if a chaotic integration process gets in the way of running the core. Management needs to reserve focus for day-to-day operations, keeping close tabs on customers and vendors, and intervening quickly if problems crop up. At the same time, it is important to validate and resize the value-creation initiatives and synergies identified in diligence. The team has to create a new value roadmap that articulates in detail the value available and how to capture it. This document redefines the size of the prize based on real data. It should be cascaded down through the organization to inform detailed team-level work plans.
  • Tackling the people challenge. Integrating large groups of people is very often the most challenging— and overlooked—aspect of bringing two companies together. Mergers are emotionally charged events that take employees out of their comfort zone. While top leadership may be thinking about pulling the team together to find value, the people on the ground, understandably, are focused on what it means for them. The change disrupts everybody; nobody knows what’s coming, and human nature being what it is, people often shut down. Getting ahead of potential disaster involves three critical areas of focus:
    • retaining key talent,
    • devising a clear operating model,
    • and solving any culture issues.

Talent retention boils down to identifying who creates the most value at the company and understanding what motivates them. Firms need to isolate the top 50 to 100 individuals most responsible for the combined company's value and devise a retention plan tailored to each one. Keeping these people on board will likely involve financial incentives, but it may be more important to present these stars with a clear vision for the future and how they can bring it to life by excelling in mission-critical roles. It is also essential to be decisive and fair in making talent decisions (see Figure 2.10).

Assigning these roles is an outgrowth of a larger challenge: devising a fit-for-purpose operating model that aligns with the overall vision for the company. This is the set of organizational elements that helps translate business strategy into action. It defines roles, reporting relationships and decision rights, as well as accountabilities.

Whether this new model works will have a lot to do with how well leadership manages the cultural integration challenge. Nothing can destroy value faster than internal dysfunction, but getting it right can be a delicate exercise.

  • Processes and systems. The final integration imperative—designing and implementing the new company's processes and systems—is all about anticipating how things will get done in the new company and building the right infrastructure to support that activity. PE firms must consider which processes to integrate and which to leave alone. The north star for these decisions is which efforts will directly contribute value within the deal time frame and which can wait. Often, this means designing an interim and an end-state solution, ensuring delivery of critical functionality now while laying the foundation for the optimal long-term solution. Integrating IT systems requires a similar decision-making process, focused on what will create the most value. If capturing synergies in the finance department involves cutting headcount within several financial planning and analysis teams, that might only happen when they are on a single system. Likewise, if the optimal operating model calls for a fully integrated sales and marketing team, then working from a single CRM system makes sense. Most PE firms are hyperfocused on the expense involved in these sorts of decisions. They weigh the onetime costs of integration against a sometimes-vague potential return and ultimately decide not to push forward. This may be a mistake. Taking a more expansive view of potential value often pays off. Early investments in IT, for instance, may look expensive in the short run. But to the extent that they make possible future investments in better capabilities or continued acquisitions, they can be invaluable.

Adjacency strategy: Taking another shot at diversification

Given the amount of capital gushing into private equity, it’s not surprising that PE firms are diversifying their fund offerings by launching new strategies. The question is whether this wave of diversification can produce better results than the last one. History has shown that expanding thoughtfully into the right adjacencies can deliver great results. But devoting time, capital and talent to strategies that stray too far afield can quickly sap performance.

In the mid-1990s, the industry faced a similar challenge in putting excess capital to work. As institutions and other large investors scoured the investment landscape for returns, they increased allocations to alternative investments, including private equity. Larger PE funds eagerly took advantage of the situation by branching into different geographies and asset classes. This opened up new fee and revenue streams, and allowed the funds to offer talented associates new opportunities. Funds first expanded geographically, typically by crossing the Atlantic from the US to Europe, then extending into Asia and other regions by the early 2000s (see Figure 2.12). Larger firms next began to experiment with asset class diversification, creating

  • growth and venture capital funds,
  • real estate funds,
  • mezzanine financing
  • and distressed debt vehicles.

Many PE firms found it more challenging to succeed in new geographies and especially in different asset classes. Credit, infrastructure, real estate and hedge funds held much appeal, in part because they were less correlated with equity markets and offered new pools of opportunity. But critically, most of these asset classes also required buyout investors to get up to speed on very different capabilities, and they offered few synergies. Compared with buyouts, most of these adjacent asset classes had a different investment thesis, virtually no deal-sourcing overlap, little staff or support-function cost sharing, and a different limited-partner (LP) risk profile. To complicate matters, PE firms found that many of these adjacencies offered lower margins than their core buyout business. Some came with lower fees, and others did not live up to performance targets. Inherently lower returns for LPs made it difficult to apply the same fee structures as for traditional buyouts. To create attractive total economics and pay for investment teams, PE firms needed to scale up some of these new products well beyond what they might do in buyouts. That, in turn, threatened to change the nature of the firm.

For large firms that ultimately went public, like KKR, Blackstone and Apollo, the shift in ownership intensified the need to produce recurring, predictable streams of fees and capital gains. Expanding at scale in different asset classes became an imperative. And today, buyouts represent a minority of their assets under management.

As other firms pursued diversification, however, the combination of different capabilities and lower returns wasn't always worth the trade-off. When the global financial crisis hit, money dried up, causing funds to retrench from adjacencies that did not work well—either because of a lack of strategic rationale or because an asset class struggled overall. Of the 100 buyout firms that added adjacencies before 2008 (roughly 1 in 10 firms active then), 20% stopped raising capital after the crisis, and nearly 65% of those that remained had to pull out of at least one of their asset classes (see Figure 2.13).

Diversification, it became clear, was trickier to navigate than anticipated. Succeeding in any business that’s far from a company’s core capabilities presents a stiff challenge—and private equity is no different. To test this point, we looked at a sample of funds launched between 1998 and 2013 by 184 buyout firms for which we had performance data, each of which had raised at least $1.5 billion during that period. We found that, when it comes to maintaining a high level of returns, staying close to the core definitely matters. Our study defined “core/near-in” firms as those that dedicated at least 90% of their raised capital to buyouts and less than 5% to adjacencies (including infrastructure, real estate and debt). We compared them to firms that moved further away from the core (dedicating more than 5% to adjacencies). The results: On average, 28% of core/near-in firms’ buyout funds generated top-quartile IRR performance, vs. 21% for firms that moved further afield (see Figure 2.14). The IRR gap for geographic diversification is more muted, because making such moves is generally easier than crossing asset types. But expanding into a new country or region does require developing or acquiring a local network, as well as transferring certain capabilities. And the mixed IRR record that we identified still serves as a caution: Firms need to be clear on what they excel at and exactly how their strengths could transfer to adjacent spaces.

With a record amount of capital flowing into private equity in recent years, GPs again face the question of how to deploy more capital through diversification. While a few firms, such as Hellman & Friedman, remain fully committed to funding their core buyout strategy, not many can achieve such massive scale in one asset class. As a result, a new wave of PE products is finding favor with both GPs and LPs. Top performers are considering adjacencies that are one step removed from the core, rather than two or three steps removed. The best options take advantage of existing platforms, investment themes and expertise. They’re more closely related to what PE buyout firms know how to do, and they also hold the prospect of higher margins for the GP and better net returns for LPs. In other words, these new products are a different way to play a familiar song.

There are any number of ways for firms to diversify, but several stand out in today’s market (see Figure 2.15):

  • Long-hold funds have a life span of up to 15 years or so, offering a number of benefits. Extending a fund’s holding period allows PE firms to better align with the longer investment horizon of sovereign wealth funds and pension funds. It also provides access to a larger pool of target companies and allows for flexibility on exit timing with fewer distractions. These funds represent a small but growing share of total capital.
  • Growth equity funds target minority stakes in growing companies, usually in a specific sector such as technology or healthcare. Though the field is getting more crowded, growth equity has been attractive given buyout-like returns, strong deal flow and less competition than for other types of assets. Here, a traditional buyout firm can transfer many of its core capabilities. Most common in Asia, growth equity has been making inroads in the US and Europe of late.
  • Sector funds focus exclusively on one sector in which the PE firm has notable strengths. These funds allow firms to take advantage of their expertise and network in a defined part of the investing landscape.
  • Mid-market funds target companies whose enterprise value typically ranges between $50 million and $500 million, allowing the firm to tap opportunities that would be out of scope for a large buyout fund.

All of the options described here have implications for a PE firm’s operating model, especially in terms of retaining talent, communicating an adjacency play to LPs, avoiding cannibalization of the firm’s traditional buyout funds and sorting out which deal belongs in which bucket.

GPs committed to adjacency expansion should ask themselves a few key questions:

  • Do we have the resident capabilities to execute well on this product today, or can we add them easily?
  • Does the asset class leverage our cost structure?
  • Do our customers—our LPs—want these new products?
  • Can we provide the products through the same channels?
  • Have we set appropriate expectations for the expansion, both for returns and for investments?

Clear-eyed answers to these questions will determine whether, and which, adjacencies make sense. The past failures and retrenchments serve as a reminder that investing too far afield risks distracting GPs from their core buyout funds. Instead, a repeatable model consists of understanding which strengths a fund can export and thoughtfully mapping those strengths to the right opportunities (see Figure 2.16).

Adjacency expansion will remain a popular tack among funds looking for alternative routes to put their capital to work. Funds that leverage their strengths in a disciplined, structured way stand the best chance of reaping healthy profits from expansion.

Advanced analytics: Delivering quicker and better insights

At a time when PE firms face soaring asset prices and heavy competition for deals, advanced analytics can help them derive the kinds of proprietary insights that give them an essential edge against rivals. These emerging technologies can offer fund managers rapid access to deep information about a target company and its competitive position, significantly improving the firm’s ability to assess opportunities and threats. That improves the firm’s confidence in bidding aggressively for companies it believes in—or walking away from a target with underlying issues.

What’s clear, however, is that advanced analytics isn’t for novices. Funds need help in taking advantage of these powerful new tools. The technology is evolving rapidly, and steady innovation creates a perplexing array of options. Using analytics to full advantage requires staying on top of emerging trends, building relationships with the right vendors, and knowing when it makes sense to unleash teams of data scientists, coders and statisticians on a given problem. Bain works with leading PE firms to sort through these issues, evaluate opportunities and build effective solutions. We see firms taking advantage of analytics in several key areas.

Many PE funds already use scraping tools to extract and analyze data from the web. Often, the goal is to evaluate customer sentiment or to obtain competitive data on product pricing or assortment. New tools make it possible to scrape the web much more efficiently, while gaining significantly deeper insights. Deployed properly, they can also give GPs the option to build proprietary databases over time by gathering information daily, weekly or at other intervals. Using a programming language such as Python, data scientists can direct web robots to search for and extract specific data much more quickly than in the past (see Figure 2.17). With the right code and the right set of target websites, new tools can also allow firms to assemble proprietary databases of historical information on pricing, assortment, geographic footprint, employee count or organizational structure. Analytics tools can access and extract visible and hidden data (metadata) as frequently as fund managers find useful.
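
As a flavor of what such a pipeline looks like, here is a minimal, hedged sketch using the widely used requests and BeautifulSoup libraries. The URL, CSS selectors and output file are hypothetical placeholders:

```python
# Minimal recurring-scrape sketch: pull product names and prices from
# a (hypothetical) category page and append a dated snapshot to a CSV,
# building a longitudinal pricing/assortment database over repeated runs.
# Real deployments must respect robots.txt, terms of service and rate limits.
import csv
from datetime import date

import requests
from bs4 import BeautifulSoup

URL = "https://example.com/category/widgets"  # hypothetical target page

response = requests.get(URL, timeout=30)
response.raise_for_status()
soup = BeautifulSoup(response.text, "html.parser")

with open("price_history.csv", "a", newline="") as f:
    writer = csv.writer(f)
    for product in soup.select("div.product"):   # hypothetical CSS selector
        name = product.select_one("h2.name")     # hypothetical selectors
        price = product.select_one("span.price")
        if name and price:
            writer.writerow([date.today().isoformat(),
                             name.get_text(strip=True),
                             price.get_text(strip=True)])
```

Scheduling a script like this daily or weekly is what turns a one-off scrape into the proprietary historical database the text describes.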

Most target companies these days sell through online channels and rely heavily on digital marketing. Fewer do it well. The challenge for GPs during due diligence is to understand quickly if a target company could use digital technology more effectively to create new growth opportunities. Post-acquisition, firms often need similar insights to help a portfolio company extract more value from its digital marketing strategy. Assessing a company's digital positioning—call it a digital X-ray—is a fast and effective way to gain these insights. A well-trained team can build the assessment in a few hours, working from the outside in—before a fund even bids. It is also relatively easy to ask for access to a target company's Google AdWords and Google Analytics platforms. That can produce a raft of digital metrics and further information on the target's market position.
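
The outside-in portion of such an X-ray can start with something as simple as fingerprinting the marketing technology on a target's site. The sketch below checks a homepage for a few common tracker signatures; the domain is hypothetical, the fingerprint list is illustrative, and a real assessment digs far deeper (paid search spend, organic rankings, conversion funnels):

```python
# Hypothetical outside-in "digital X-ray" starter: fingerprint the
# marketing/analytics stack on a target's homepage. The domain is a
# placeholder; the fingerprint list is illustrative, not exhaustive.
import requests

DOMAIN = "https://www.example-target.com"  # hypothetical target

html = requests.get(DOMAIN, timeout=30).text.lower()

# Script-host substrings commonly left in pages by popular tools.
fingerprints = {
    "Google Analytics (gtag)": "googletagmanager.com/gtag",
    "Google Tag Manager": "googletagmanager.com/gtm.js",
    "Meta (Facebook) pixel": "connect.facebook.net",
    "Hotjar": "static.hotjar.com",
}

for tool, marker in fingerprints.items():
    status = "present" if marker in html else "not detected"
    print(f"{tool:25s}: {status}")
```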

One challenge for PE funds historically has been accessing data from large networks or from scattered and remote locations. But new tools let deal teams complete such efforts in a fraction of the time and at a fraction of the cost.

One issue that PE deal teams often ponder in evaluating companies is traffic patterns around retail networks, manufacturing facilities and transport hubs. Is traffic rising or declining? What's the potential to increase it? In some industries, it's difficult to track such data, especially for competitors. But high-definition satellite or drone imagery can yield insights into traffic flows over time.
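
At its crudest, this boils down to change detection between images of the same site taken at different times. The toy sketch below, with hypothetical file names and a made-up brightness threshold, reports the share of pixels that changed between two co-registered grayscale snapshots; production pipelines would instead run object detectors (counting cars or trucks) over georegistered imagery:

```python
# Toy change-detection sketch: compare two grayscale images of the same
# site (file names hypothetical) and report the share of pixels that
# changed materially -- a crude proxy for a shift in activity.
import numpy as np
from PIL import Image

before = np.asarray(Image.open("site_2018.png").convert("L"), dtype=float)
after = np.asarray(Image.open("site_2019.png").convert("L"), dtype=float)

assert before.shape == after.shape, "images must be co-registered"

changed = np.abs(after - before) > 30   # assumed brightness-change threshold
print(f"Share of pixels with material change: {changed.mean():.1%}")
```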

Another advantage of analytics tools is the ability to see around corners, helping fund managers anticipate how disruptive new technologies or business models may change the market. Early signs of disruption are notoriously hard to quantify. Traditional measures such as client satisfaction or profitability won’t ring the warning bells soon enough. Even those who know the industry best often fail to anticipate technological disruptions. With access to huge volumes of data, however, it’s easier to track possible warning signs, such as the level of innovation or venture capital investment in a sector. That’s paved the way for advanced analytics tools that allow PE funds to spot early signals of industry disruption, understand the level of risk and devise effective responses. These insights can be invaluable, enabling firms to account for disruption as they formulate bidding strategies and value-creation plans.
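
One simple way to operationalize such a warning sign, sketched below under the assumption of a hypothetical CSV of venture funding by sector and year, is to flag any sector whose year-over-year VC funding growth crosses a threshold; real monitoring would blend several signals (patents, hiring, startup formation) as the text suggests:

```python
# Hedged early-warning sketch: flag sectors whose year-over-year VC
# funding growth crosses a threshold. The CSV (columns: year, sector,
# vc_funding_musd) and the 50% threshold are illustrative assumptions.
import pandas as pd

ALERT_THRESHOLD = 0.5  # flag >50% YoY growth in sector VC funding

df = pd.read_csv("vc_funding_by_sector.csv")
df = df.sort_values(["sector", "year"])
df["yoy_growth"] = df.groupby("sector")["vc_funding_musd"].pct_change()

alerts = df[df["yoy_growth"] > ALERT_THRESHOLD]
print(alerts[["year", "sector", "yoy_growth"]])
```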

These are just a few of the ways that PE firms can apply advanced analytics to improve deal analysis and portfolio company performance. We believe that the burst of innovation in this area will have profound implications for how PE funds go about due diligence and manage their portfolio companies. But most funds will need to tap external expertise to stay on top of what’s possible. A team-based approach that assembles the right expertise for a given problem helps ensure that advanced analytics tools deliver on their promise.

Click here to access Bain’s Private Equity Report 2019