The Global Risks Landscape 2019

Is the world sleepwalking into a crisis? Global risks are intensifying but the collective will to tackle them appears to be lacking. Instead, divisions are hardening. The world’s move into a new phase of strongly state-centred politics, noted in last year’s Global Risks Report, continued throughout 2018. The idea of “taking back control”— whether domestically from political rivals or externally from multilateral or supranational organizations — resonates across many countries and many issues. The energy now expended on consolidating or recovering national control risks weakening collective responses to emerging global challenges. We are drifting deeper into global problems from which we will struggle to extricate ourselves.

During 2018, macroeconomic risks moved into sharper focus. Financial market volatility increased and the headwinds facing the global economy intensified. The rate of global growth appears to have peaked: the latest International Monetary Fund (IMF) forecasts point to a gradual slowdown over the next few years. This is mainly the result of developments in advanced economies, but projections of a slowdown in China—from 6.6% growth in 2018 to 6.2% this year and 5.8% by 2022—are a source of concern. So too is the global debt burden, which is significantly higher than before the global financial crisis, at around 225% of GDP. In addition, a tightening of global financial conditions has placed particular strain on countries that built up dollar-denominated liabilities while interest rates were low.

Geopolitical and geo-economic tensions are rising among the world’s major powers. These tensions represent the most urgent global risks at present. The world is evolving into a period of divergence following a period of globalization that profoundly altered the global political economy. Reconfiguring the relations of deeply integrated countries is fraught with potential risks, and trade and investment relations among many of the world’s powers were difficult during 2018.

Against this backdrop, it is likely to become more difficult to make collective progress on other global challenges—from protecting the environment to responding to the ethical challenges of the Fourth Industrial Revolution. Deepening fissures in the international system suggest that systemic risks may be building. If another global crisis were to hit, would the necessary levels of cooperation and support be forthcoming? Probably, but the tension between the globalization of the world economy and the growing nationalism of world politics is a deepening risk.

Environmental risks continue to dominate the results of our annual Global Risks Perception Survey (GRPS). This year, they accounted for three of the top five risks by likelihood and four by impact. Extreme weather was the risk of greatest concern, but our survey respondents are increasingly worried about environmental policy failure: having fallen in the rankings after Paris, “failure of climate-change mitigation and adaptation” jumped back to number two in terms of impact this year. The results of climate inaction are becoming increasingly clear. The accelerating pace of biodiversity loss is a particular concern. Species abundance is down by 60% since 1970. In the human food chain, biodiversity loss is affecting health and socioeconomic development, with implications for well-being, productivity, and even regional security.

Technology continues to play a profound role in shaping the global risks landscape. Concerns about data fraud and cyber-attacks were prominent again in the GRPS, which also highlighted a number of other technological vulnerabilities: around two-thirds of respondents expect the risks associated with fake news and identity theft to increase in 2019, while three-fifths said the same about loss of privacy to companies and governments. There were further massive data breaches in 2018, new hardware weaknesses were revealed, and research pointed to the potential uses of artificial intelligence to engineer more potent cyber-attacks. Last year also provided further evidence that cyber-attacks pose risks to critical infrastructure, prompting countries to strengthen their screening of cross-border partnerships on national security grounds.

The importance of the various structural changes that are under way should not distract us from the human side of global risks. For many people, this is an increasingly anxious, unhappy and lonely world. Worldwide, mental health problems now affect an estimated 700 million people. Complex transformations— societal, technological and work-related—are having a profound impact on people’s lived experiences. A common theme is psychological stress related to a feeling of lack of control in the face of uncertainty. These issues deserve more attention: declining psychological and emotional wellbeing is a risk in itself—and one that also affects the wider global risks landscape, notably via impacts on social cohesion and politics.

Another set of risks being amplified by global transformations relates to biological pathogens. Changes in how we live have increased the risk of a devastating outbreak occurring naturally, and emerging technologies are making it increasingly easy for new biological threats to be manufactured and released either deliberately or by accident. The world is badly under-prepared for even modest biological threats, leaving us vulnerable to potentially huge impacts on individual lives, societal well-being, economic activity and national security. Revolutionary new biotechnologies promise miraculous advances, but also create daunting challenges of oversight and control—as demonstrated by claims in 2018 that the world’s first gene-modified babies had been created.

Rapidly growing cities and ongoing effects of climate change are making more people vulnerable to rising sea levels. Two-thirds of the global population is expected to live in cities by 2050 and already an estimated 800 million people live in more than 570 coastal cities vulnerable to a sea-level rise of 0.5 metres by 2050. In a vicious circle, urbanization not only concentrates people and property in areas of potential damage and disruption, it also exacerbates those risks— for example by destroying natural sources of resilience such as coastal mangroves and increasing the strain on groundwater reserves. Intensifying impacts will render an increasing amount of land uninhabitable. There are three main strategies for adapting to rising sea-levels:

  1. engineering projects to keep water out,
  2. nature-based defences,
  3. and people-based strategies, such as moving households and businesses to safer ground or investing in social capital to make flood-risk communities more resilient.

In this year’s Future Shocks section, we focus again on the potential for threshold effects that could trigger dramatic deteriorations and cause cascading risks to crystallize with dizzying speed. Each of the 10 shocks we present is a “what-if” scenario—not a prediction, but a reminder of the need to think creatively about risk and to expect the unexpected. Among the topics covered this year are

  • quantum cryptography,
  • monetary populism,
  • affective computing
  • and the death of human rights.

In the Risk Reassessment section, experts share their insights about how to manage risks. John Graham writes about weighing the trade-offs between different risks, and András Tilcsik and Chris Clearfield write about how managers can minimize the risk of systemic failures in their organizations.

And in the Hindsight section, we revisit three of the topics covered in previous reports:

  • food security,
  • civil society
  • and infrastructure investment.


Click here to access WEF-MMC-Zurich’s Global Risks Report 2019

Transform Your Business With Operational Decision Automation

Decisioning Applications Bring The Value Of Operational Decisions To Light

Businesses face the imperative to transform from analog to digital, driven by intense competition for increasingly demanding and digitally connected customers. This imperative has ushered in a new era of decisioning applications, in which every operational decision an organization makes can be considered a business asset. New applications inform and advance the customer experience and drive operational actions in real time through automation. These applications are at the forefront of the effort to streamline operations and help organizations take the right action at the right time, near-instantaneously.

Achieve Digital Goals With Automated Operational Decision Making

Automating decision life cycles allows firms to manage the fast changes required in increasingly digitized business processes. Automation of operational decisions is crucial to meeting digital goals: more than three-quarters of decision makers say it is important to their digital strategy, and close to half say it is very important.

« The share of decisions that are automated will increase markedly in two years »

The importance of automated operational decision making to digital strategy will lead to a sharp increase of automation in the near term. Today, about one-third of respondents say they have the majority of their operational decisions fully or partially automated. In two years, that group will double.

Use Cases For Automated Decisions Span The Customer Lifecycle But Current Focus Is On Early Stages

To improve the operational aspects of customer experience — and to reap the business benefits that come with delighting customers — firms align automated decision use cases to the customer lifecycle. At least some firms have expanded their share of automated operational decision making to include touchpoints across the customer lifecycle, from the discover phase all the way to the engage phase. However, our survey found that the majority have yet to implement automated decisions as fully in later stages.

Top Challenges Will Intensify With Rapid Expansion Of New Decisioning Tools

Firms are experiencing middling success with current decision automation tools. Only 22% are very satisfied with their decisioning software today. Misgivings with today’s tools include inability to integrate with current systems or platforms, high cost, and lack of consistency across channels and processes.

The growth of real-time automation use cases and the number of technologies brought on to handle them will exacerbate existing challenges with complexity and cost.

Decision Makers Recognize High Value In Decisioning Platforms That Work In Real Time

Decision makers face significant implementation and cost challenges on their path to automated operational decisions. As a result, getting the greatest business value for the power of their automation tools is top of mind.

« Eighty-one percent of decision makers say a platform with real-time decision-to-action cycles would be valuable or very valuable to achieving digital transformation goals. »

With better, targeted decisions based on real-time analytics, companies have the potential to acquire better customers, improve the operations that serve them, and retain them longer.


Click here to access Forrester’s research report

How to seize the Open Banking opportunity

What is Open Banking and why does it matter?

The UK has long been recognised as a global leader in banking. The industry plays a critical role domestically, enabling the day-to-day flow of money and management of risk that are essential for individuals and businesses.

It is also the most internationally competitive industry in the UK, providing the greatest trade surplus of any exporting industry. The UK has a mature and sophisticated banking market with leading banks, FinTechs and regulators. However, with fundamental technological, demographic, societal and political changes underway, the industry needs to transform itself in order to effectively serve society and remain globally relevant.

The industry faces a number of challenges. These include the fact that banking still suffers from a poor reputation and relatively low levels of trust when compared to other industries. Many of the incumbents are still struggling to modernise their IT platforms and to embrace digital in a way that fundamentally changes the cost base and the way customers are served.

There are also growing service gaps in the industry, with 16m people trapped in the financial advice gap. In the face of these challenges, Open Banking provides an opportunity to

  • open up the banking industry,
  • ignite innovation to tackle some of these issues
  • and radically enhance the public’s interaction and experience with the financial services industry.

A wave of new challenger banks has entered the market with these opportunities at the heart of their propositions. However, increased competition is no longer the only objective of Open Banking.

Open Banking regulation has evolved from the original intent

The UK started introducing an Open Banking Standard in 2016 to make the banking sector work harder for the benefit of consumers. The implementation of the standard was guided by recommendations from the Open Banking Working Group, made up of banks and industry groups and co-chaired by the Open Data Institute and Barclays. It had a focus on how data could be used to “help people to transact, save, borrow, lend and invest their money”. The standard’s framework sets out how to develop a set of standards, tools, techniques and processes that will stimulate competition and innovation in the country’s financial sector.

While the UK was developing Open Banking, the European Parliament adopted the revised payment services directive (PSD2) to make it easier, faster, and less expensive for customers to pay for goods and services, by promoting innovation (especially by third-party providers). PSD2 acknowledges the rise of payment-related FinTechs and aims to create a level playing field for all payment service providers while ensuring enhanced security and strong customer protection. PSD2 requires all payment account providers across the EU to provide third-party access.


While this does not require an open standard, PSD2 does provide the legal framework within which the Open Banking standard in the UK and future efforts at creating other national Open Banking standards in Europe will have to operate. The common theme within these initiatives is the recognition that individual customers have the right to provide third parties with access to their financial data. This is usually done in the name of

  • increased competition,
  • accelerating technology development of new products and services,
  • reducing fraud
  • and bringing more people into a financially inclusive environment.

Although the initial objectives of the Open Banking standards were to increase competition in banking and increase current account switching, the intent is continually evolving with a broader focus on areas including:

  • reduced overdraft fees,
  • improved customer service,
  • greater control of data
  • and increased financial inclusion.

Whilst there is little argument that the UK leads the way in Open Banking, it is by no means doing so alone. Many other countries are looking carefully at the UK experience to understand how a local implementation might learn from some of the issues experienced during the UK’s preparation and ‘soft launch’ in January 2018. There are many informal networks around the world which link regulators, FinTechs and banks to facilitate the sharing of information from one market to another. Countries around the world are at various stages of maturity in implementing Open Banking. The UK leads as the only country to have legislated and built a development framework to support the regulations, enabling it to be advanced in bringing new products and services to market as a result. However, a number of other countries are progressing rapidly towards their own development of Open Banking. In a second group sit the EU, Australia and Mexico, which have taken significant steps in legislation and implementation. Canada, Hong Kong, India, Japan, New Zealand, Singapore, and the US are all making progress in preparing their respective markets for Open Banking initiatives.


One danger in any international shift in thinking, such as Open Banking, is that technology overtakes the original intention. The ‘core technology’ here is open APIs and they feature in all the international programmes, even when an explicit ‘Open Banking’ label is not applied. In a post-PSD2 environment, the primary responsibility for security risks will lie with payment service providers. Vulnerability to data security breaches may increase in line with the number of partners interacting via the APIs.
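As a minimal illustration of that ‘core technology’, the sketch below builds (but does not send) an account-information request in the style of the UK Open Banking Read/Write API. The bank URL, API version and token are hypothetical, and a real call would additionally require third-party-provider registration, a customer consent flow and mutual TLS:

```python
import urllib.request

# Illustrative only: the base URL and access token are hypothetical.
# Real access requires a registered third-party provider, an OAuth2
# consent flow with the customer, and mutual TLS with the bank.
BASE = "https://api.examplebank.co.uk/open-banking/v3.1/aisp"

def build_accounts_request(access_token: str) -> urllib.request.Request:
    """Build (but do not send) an account-information request on behalf
    of a customer who has consented to share account data."""
    return urllib.request.Request(
        f"{BASE}/accounts",
        headers={
            "Authorization": f"Bearer {access_token}",  # token issued after consent
            "Accept": "application/json",
        },
    )

req = build_accounts_request("example-access-token")
print(req.full_url)
```

The point of the standard is that every bank exposes the same shape of endpoint, so a third party can aggregate a customer’s accounts across providers with one integration rather than one per bank.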

The new EU General Data Protection Regulation (GDPR) requires protecting customer data privacy as well as capturing and evidencing customer consent, with potentially steep penalties for breaches. Payment service providers must therefore ensure that comprehensive security measures are in place to protect the confidentiality and integrity of customers’ security credentials, assets and data.


Click here to access PwC’s detailed report

Banks sailing in uncertain waters

The apparent paradox of the decision-making process

Corporate decision-making processes are driven by seemingly opposing forces.

On the one hand, the need to have instruments at one’s disposal in order

  • to understand the context and set a specific direction,
  • and to implement the actions required to follow the plotted course.

On the other hand, the exhortation to keep the mind open

  • to an array of possible future scenarios,
  • to imagine and grasp the implications of the various possible trajectories,
  • and to plot alternative courses according to the obstacles and opportunities encountered, which could lead to landing places other than those originally contemplated.

These needs are intertwined as never before whenever the decision-maker operates in an area such as the banking sector, which is characterised by extremely pervasive regulatory requirements concerning the

  • maintenance and use of capital,
  • liquidity management,
  • checks on lending and distribution policies,

and which is structurally exposed to the volatility of the macroeconomic context and the financial markets, greatly increasing the range of possible scenarios.

Thus, it is hardly surprising that one of the most common questions that CEOs ask the technical functions responsible for budgeting and risk planning is ‘what if?’ (‘what would happen if…?’). The problem is that, in the last few years, the ‘ifs’ at hand have multiplied rapidly, with an exponential increase in the controlling variables for which feedback is required:

  • Net Interest Income (NII);
  • Cost Income ratio (C/I);
  • Return On Equity (ROE);
  • Non Performing Exposure (NPE) Ratio;
  • Liquidity Coverage Ratio (LCR);
  • Expected Credit Loss (ECL);
  • Common Equity Tier 1 (CET1) ratio,

to cite but a few of the most widespread. Planning has turned into an interdisciplinary and convoluted exercise, a problem that is hard to solve for CFOs and CROs in particular, especially when they do not operate in close cooperation.
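As a toy illustration (all figures invented), a single ‘what if’ can be propagated into several of these controlling variables at once:

```python
# All figures invented: a toy budget used to show how one 'what if'
# propagates into several controlling variables simultaneously.
def kpis(nii, other_income, costs, net_profit, equity, cet1_capital, rwa):
    revenues = nii + other_income
    return {
        "C/I":  costs / revenues,      # Cost Income ratio
        "ROE":  net_profit / equity,   # Return On Equity
        "CET1": cet1_capital / rwa,    # Common Equity Tier 1 ratio
    }

baseline = kpis(2000, 1000, 1800, 700, 10000, 1400, 12000)

# 'What if' rates fall: NII drops 10% and profit follows, all else equal.
rate_cut = kpis(1800, 1000, 1800, 560, 10000, 1400, 12000)

for k in ("C/I", "ROE", "CET1"):
    print(f"{k}: {baseline[k]:.1%} -> {rate_cut[k]:.1%}")
```

Even this caricature shows the interdisciplinary nature of the exercise: one macroeconomic ‘if’ moves the P&L, efficiency and capital views together, which is why the answer cannot come from the CFO or the CRO alone.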

This greater complexity can progressively erode the quality of banks’ decision-making processes, which are more often than not based on an incomplete information framework when some controlling variables are unavailable, or on an incorrect one when the information, specialist expertise and/or instruments required to model events are lacking.

Complicating matters further, such events are not only numerous but also interdependent in their impact on the bank’s results and particularly heterogeneous. They can in fact be exogenous (turbulence and interference along the way) or endogenous (the actions that the helmsman and the crew implement during navigation).

In the first case, these events are beyond the control of those responsible for the decision-making process, determined by the evolution of the market conditions and/or the choices of institutional subjects. As such, they are often hard to predict in their likelihood of occurrence, intensity, timing and duration. By nature, such phenomena are characterised by complex interactions, that make it crucial, albeit arduous, to comprehend the cause-effect mechanisms governing them. Lastly, their relevance is not absolute, but relative, in that it depends on the degree of reactivity of the bank’s business model and budgetary structure to the single risk factors to which the market value of the banks’ assets is exposed.

Conversely, in the case of endogenous events, uncertainty is more correlated to the ability of the bank’s top management

  • to quantify the level of ambition of the business actions,
  • to assess their multiple implications,
  • and, specifically, to the bank’s actual ability to implement them within the requested time frames and terms.

The taxonomy of banking strategic planning

Although these complexities are increasingly obvious, many banks still set out on their respective courses as if the route were certain, exposing themselves to a range of risks that can restrict or irreversibly compromise the efficacy of their decision-making processes. Some institutions are indeed persuaded that the ‘expert-based’ approach that has always characterised their planning methodologies will continue to be sufficient and appropriate for steering the bank in future.

History teaches us that things have not always worked out that way. These actors have yet to understand that it has become vital to evolve the planning process towards a model relying on analytical methodologies and highly sophisticated technological instruments (risk management, econometrics, statistics, financial engineering, …), making these available to those who have always considered experience, business knowledge and budgetary dynamics to be the privileged instruments for making decisions.

The second mistake: many banks believe uncertainty analysis to be wasteful and redundant for planning purposes since, ultimately, the allocation of objectives is (and will remain) based on assumptions and uniquely identified scenarios. In this case, the risk lies in failing to understand that a broader analysis of possible scenarios in fact contributes to better delineating the assigned objectives, by separating the external conditions from the contribution of internal actions. Moreover, testing various hypotheses and combinations of cases makes it easier to calibrate the ‘level of managerial ambition’, in line with the actual potential of the organisational structure and with the full involvement of the business functions responsible for attaining the corporate objectives.

The intersection of these two misreadings of the context results in a different positioning of the bank, with the relative risks and opportunities.

Models

ILLUMINATED

The planning process is built upon analytical data and models developed with the contribution of subject-matter experts of different backgrounds, which makes it possible to consider the impacts of a specific scenario on the bank’s budget simultaneously and coherently. Not only does the bank improve the planning of specific items, it also has the appropriate instruments at its disposal to switch to a multi-scenario perspective, investigate the scenarios relevant for management and appraise the volatility around the expected results. This transition is extremely delicate: it entails a change in the way prospective information is produced by the technical functions and subsequently channelled to top management and the board of directors. In this context, the bank is governed via the analysis of deterministic scenarios and the statistical analysis of the probability distributions of the variables of interest. Leveraging this set of information (much more abundant and articulated than the traditional one), targets, risk-propensity levels and the related alert and tolerance thresholds are established; business owners are provided not only with the final objectives, but also with details of the key risk factors (endogenous and exogenous alike) that might represent critical or success factors, and their respective probabilities of occurrence.
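The multi-scenario, probability-distribution view described here can be caricatured in a few lines; all figures, distributions and thresholds below are invented for illustration:

```python
import random
import statistics

random.seed(42)

# All figures invented. Simulate ROE under uncertain net interest income
# and credit losses, then read off the expected value, the volatility of
# the result, and how often a tolerance threshold would be breached.
EQUITY = 10_000
TOLERANCE = 0.04          # illustrative alert threshold on ROE

def simulated_roe():
    nii = random.gauss(2_000, 150)           # exogenous: rate environment
    credit_losses = random.gauss(400, 120)   # exogenous: cycle-dependent
    fixed_items = 300 - 1_200                # endogenous: assumed constant
    return (nii - credit_losses + fixed_items) / EQUITY

roes = [simulated_roe() for _ in range(10_000)]
print(f"expected ROE: {statistics.mean(roes):.2%}")
print(f"volatility:   {statistics.stdev(roes):.2%}")
print(f"P(ROE < tolerance): {sum(r < TOLERANCE for r in roes) / len(roes):.1%}")
```

The last line is the kind of output an ‘illuminated’ bank channels to the board alongside the point target: not just the objective, but the probability of falling below the tolerance threshold and which risk factors drive it.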

DELUDED

The budget planning process is characterised by the prevalence of an expert-based approach, with a limited capacity to integrate quantitative models and methodologies, in that not all budget items are developed with the necessary instruments and expertise. It is aimed at forecasting a single baseline scenario, the one under which the budget objectives are formalised and then broken down across organisational units and business combinations.

ENLIGHTENED

The budgetary planning process is very accurate and incorporates the specialist expertise (often cross-functional) required to understand and transmit the interactions across managerial phenomena, so as to ensure a full grasp of the bank’s ongoing context. The onus is chiefly on the ability to explain the phenomena inside the bank, without prejudice to the external baseline scenario, which is ‘given’ by definition.

MISSING

The planning process attempts to consider the impact of alternative scenarios as compared to the baseline scenario; however, it is implemented on the basis of imprecise or incomplete modelling, developed without the analytical foundations and instruments required to appraise the consistency and likelihood of those scenarios. The focus remains on comparing the results produced under diverse conditions, while taking into account the approximations used.

Click here to access Prometeia’s white paper

Is Your Company Ready for Artificial Intelligence?

Overview

Companies are rushing to invest in and pursue initiatives that use artificial intelligence (AI). Some hope to transform their business processes and gain competitive advantage, while others are concerned about falling behind the technology curve. But the reality is that many AI initiatives don’t work as planned, largely because companies are not ready for AI.

However, it is possible to leverage AI to create real business value. The key to AI success is ensuring the organization is ready by having the basics in place, particularly structured analytics and automation. Other elements of AI readiness include

  • executive engagement and support,
  • data excellence,
  • organizational capabilities,
  • and completion of AI pilots.

Key Takeaways

There is tremendous AI hype and investment. Artificial intelligence is software that can make decisions without explicit instructions for each scenario, including an ability to learn and improve over time. The term “machine learning” is often used interchangeably with AI, but machine learning is just one approach to AI, though it is currently the approach generating the most attention. Today in most business situations where AI is relevant, machine learning is likely to be employed.

The hype around AI is tremendous and has accelerated in the last few years. It is rare to read a business-related article these days that doesn’t mention AI.

The AI hype is being accompanied by massive investments from corporations (like Amazon, Google, and Uber), as well as from venture capital firms.

Because organizations often pursue AI without fully understanding it or having the basics in place, many AI initiatives fail. The AI fervor is causing companies to hurriedly pursue AI. There is a rush to capitalize on AI, but significant frustration when it comes to actually delivering AI success. AI initiatives are often pursued for the wrong reasons and many AI initiatives experience pitfalls. Some key pitfalls are:

  • Expensive partnerships between large companies and startups without results.
  • Impenetrable black box systems.
  • Open source toolkits without programmers to code.

The root cause for these failures often boils down to companies confusing three different topics:

  • automation,
  • structured analytics,
  • and artificial intelligence.


Despite the challenges, some organizations are experiencing success with AI. While the hype around AI is overblown, there are organizations having success by leveraging AI to create business value, particularly when AI is used for customer support and in the back office.

The key to AI success is first having the basics in place. In assessing AI successes and failures, the presenters drew three conclusions:

  1. There is a huge benefit from first getting the basics right: automation and structured analytics are prerequisites to AI.
  2. The benefits from AI are greater once these basics have been done right.
  3. Organizations are capable of working with AI at scale only when the basics have been done at scale.

GETTING THE BASICS RIGHT

The most important basics for AI are automation and structured analytics.

  • Automation: In most businesses there are many examples of data processes that can be automated. In many of these cases, there is no point in deploying advanced AI if the basics are not yet automated.
  • Structured analytics: applying standard statistical techniques to well-structured data. In most companies there is huge value in getting automation and structured analytics right before moving on to more complicated AI.

Examples of how businesses use structured analytics and automation include:

  • Competitor price checking. A retailer created real-time pricing intelligence by automatically scraping prices from competitors’ websites.
  • Small business cash flow lending product. Recognizing the need for small business customers to acquire loans in days, not weeks, a bank created an online lending product built on structured analytics.
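The competitor price-checking example might reduce, in miniature, to something like this (all prices invented): automation supplies the scraped competitor prices, and the structured analytics step is a plain rule over well-structured data.

```python
# All prices invented. Automation supplies the scraped competitor prices;
# structured analytics is a simple rule applied to well-structured data.
our_prices = {"kettle": 24.99, "toaster": 19.99, "blender": 49.99}
competitor_prices = {
    "kettle":  [22.50, 23.00, 25.10],
    "toaster": [21.00, 20.50],
    "blender": [44.00, 45.50, 46.00],
}

def overpriced(our, rivals, margin=0.05):
    """Flag items priced more than `margin` above the cheapest rival."""
    flags = {}
    for item, price in our.items():
        cheapest = min(rivals[item])
        if price > cheapest * (1 + margin):
            flags[item] = round(price / cheapest - 1, 3)  # share above cheapest
    return flags

print(overpriced(our_prices, competitor_prices))
```

Nothing here is machine learning, and that is the point: the business value comes from getting this automated pipeline right before layering AI on top.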

BENEFITS WHEN THE BASICS ARE IN PLACE

Once the basics of structured analytics and automation are in place, organizations see more value from AI—when AI is used in specific situations.


Examples of how adding AI on top of the basics helps improve business results are:

  • New product assortment decisions. Adding AI on top of structured analytics allowed a retailer to predict the performance of new products for which there was no historic data. With this information, the retailer was able to decide whether or not to add the product to the stores.
  • Promotions forecasting. A retailer was able to improve forecasting of promotional sales using AI. Within two months of implementation, machine learning was better than the old forecasts plus the corrections made by the human forecasting team.
  • Customer churn predictions. A telephone company used AI and structured analytics to identify how to keep at-risk customers from leaving.
  • Defect detection. An aerospace manufacturer used AI to supplement human inspection and improve defect detection.
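The ‘new product with no historic data’ idea can be sketched as a simple similarity model. The features, figures and nearest-neighbour approach below are illustrative assumptions, not the retailer’s actual method:

```python
import math

# Hypothetical sketch: with no sales history for a new product, estimate
# its weekly sales from the k most similar existing products. Features
# (price, shelf_facing_cm) and all sales figures are invented.
history = {
    (2.0, 30.0): 120, (2.5, 30.0): 100, (5.0, 60.0): 310,
    (5.5, 60.0): 290, (9.0, 90.0): 520,
}

def predict_sales(features, k=2):
    by_distance = sorted(
        (math.dist(features, f), sales) for f, sales in history.items()
    )
    nearest = by_distance[:k]                  # k most similar products
    return sum(s for _, s in nearest) / k      # average their sales

print(predict_sales((5.2, 60.0)))  # a new product close to the 5.0/5.5 pair
```

This only works because the basics are in place: the historical sales and product attributes must already be structured and trusted before a model like this can borrow strength from similar products.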

AI AT SCALE AFTER THE BASICS ARE AT SCALE

Once an organization proves it can work with automation and structured analytics at scale, it is ready for AI at scale. Readiness for AI at scale goes beyond completing a few AI pilots in defined but isolated areas of capability; the basics need to be in use across the business.

Before undertaking AI, organizations need to assess their AI readiness. To be successful, organizations need to be ready for AI. Readiness consists of multiple elements, including

  • executive engagement and support,
  • data excellence,
  • organizational capabilities,
  • and an analytical orientation.

Organizations often struggle with data excellence and organizational capabilities.


Click here to access HBR and SAS article collection

2018 EIOPA Insurance Stress Test report

Executive Summary

  1. The 2018 insurance stress test is the fourth Europe-wide exercise initiated and coordinated by EIOPA. As in previous exercises, the main objective is to assess the resilience of the European insurance sector to specific adverse scenarios with potential negative implications for the stability of the European financial markets and the real economy. Hence, it cannot be considered a pass-or-fail or capital exercise for the participating groups. In total, 42 (re)insurance groups, representing a market coverage of around 75% based on total consolidated assets, participated. As this exercise is based on group-level information, no country results are provided in the report.
  2. The exercise tests the impact of a prolonged low yield environment (Yield Curve Down – YCD – scenario) as well as of a sudden reversal of risk premia (Yield Curve Up – YCU – scenario), which are currently identified as key risks across financial sectors. In the YCD scenario, market shocks are complemented by a longevity shock. In the YCU scenario, market shocks are combined with an instantaneous shock to lapse rates and claims inflation. The market shocks prescribed in the YCD and YCU scenarios are severe but plausible and were developed in cooperation with the ESRB, based on past market observations. Additionally, a natural catastrophe (NC) scenario tests the resilience of insurers to a potential materialisation of a set of catastrophe losses over Europe.
  3. Groups were requested to calculate their post-stress financial position by applying the same models used for their regular Solvency II reporting. The use of LTG and transitional measures was taken into account and the impact of these measures had to be reported separately. Restrictions were prescribed in order to accommodate for the instantaneous nature of the shocks and the static balance sheet approach. In particular, the impact of the transitional measure on technical provisions was held constant in the post-stress situation and potential management actions to mitigate the impact of the scenarios were not allowed.
  4. The novelty of this year’s exercise is the assessment of the post-stress capital position of the participants, with an estimate of the post-stress Solvency Capital Requirement (SCR). Given the operational and methodological challenges related to the recalculation of the group SCR, participating groups were allowed to use approximations and simplifications as long as a fair reflection of the direction and magnitude of the impact was warranted.
  5. In the pre-stress (baseline) situation, participating groups have an aggregate assets over liabilities (AoL) ratio of 109.5% (the ratio ranges from 103.0% to 139.5% across participating groups). Overall, the participating groups are adequately capitalised, with an aggregate baseline SCR ratio of 202.4%, indicating that they hold approximately twice the capital required by regulation.
  6. In the YCU scenario, the aggregate AoL ratio drops from 109.5% to 107.6%, corresponding to a drop of 32.2% in the excess of assets over liabilities (eAoL). Without the use of LTG and transitional measures the impact would be more severe, corresponding to a drop in AoL ratio to 105.1% (53.1% in the eAoL) with 3 groups reporting an AoL ratio below 100% (accounting for approximately 10% of total assets in the sample). The impact of the YCU scenario is driven by a significant drop in the value of assets (-12.8% for government bonds, -13.0% for corporate bonds and -38.5% for equity holdings). Overall, the losses on the asset side outweigh the gains on the liability side. Technical Provisions (TP) decrease by 17.0%, attributed mainly to a decrease in life TP (-14.5%) due to the reduced portfolio (instantaneous lapse shock) and the increased discounting curve (upwards shock to the swap curves). However, an increase in TP was observed for those groups focusing mainly on non-life business. In this case, the impact of the claims inflation shock on the non-life portfolio leads to an increase in the TP, outweighing the beneficial effect of the increased discounting curve due to shorter-term liabilities.
  7. The capital position is materially affected in the YCU scenario, but the post-stress aggregate SCR ratio remains at a satisfactory level of 145.2%, corresponding to a drop of 57.2 percentage points. However, 6 groups report a post-stress SCR ratio below 100%. This is mainly driven by a significant decrease (-29.9%) in eligible own funds (EOF) following the shocks to the asset portfolio, which are not fully compensated by the reduction of the TP, while the SCR decreases only slightly (-2.3%). LTG and transitional measures play a significant role in the post-stress capital position. Without the application of the transitional measures the aggregate SCR ratio drops by an additional 14.3 percentage points to 130.9%, while if both LTG and transitional measures are removed, the SCR ratio drops to 86.6%, with 21 groups reporting a ratio below 100%. This finding confirms the importance of the aforementioned measures, as intended by their design, for limiting the impact of short-term market movements on the financial position of insurers.
  8. In the YCD scenario, the aggregate AoL ratio decreases from 109.5% to 106.7%, corresponding to a drop in eAoL of 27.6%. Again, the impact is more severe without the use of LTG and transitional measures. The aggregate AoL ratio would drop to 104.8% in that case, corresponding to a decrease of 47.7% in eAoL, with 3 groups reporting an AoL ratio below 100% (accounting for approximately 10% of total assets in the sample). The impact of the YCD scenario can be mainly attributed to an increase in the TP on the liability side (+2.1%), driven by the increase of the life TP (+6.1%) due to the reduction of the discounting curve and the longevity shock. Total assets show a decrease (-0.8%) due to the drop in value of assets held for unit-linked contracts and equity holdings (-14.7%) which is partly offset by the increase in value of the fixed income assets (+3.1% government bonds and +2.3% corporate bonds). This scenario confirms that the European insurance industry is vulnerable to a prolonged low yield environment, also at group level.
  9. The aggregate SCR ratio in the YCD scenario drops by 64.9 percentage points, but remains at 137.4% after shock, although 7 participating groups report a ratio below 100%. The decrease in SCR ratio is driven by a material decrease in EOF (-23.5%) and a significant increase in SCR (+12.7%), both mainly due to higher technical provisions. The LTG and transitional measures partly absorb the negative impact of the prescribed shocks. Without the application of the transitional measures the SCR ratio drops to 124.1%, while excluding both LTG and transitional measures leads to an aggregate SCR ratio of 85.4%, with 20 participating groups reporting a ratio below 100%.
  10. In the NC scenario, participating groups report a drop of only 0.3 percentage points in the aggregate AoL ratio. The limited impact of the NC scenario on the participating groups is mainly due to the reinsurance treaties in place, with 55% of the losses transferred to reinsurers. The most affected participants are therefore reinsurers and those direct insurers largely involved in reinsurance activities. Furthermore, it should be noted that the losses are ceded to a limited number of counterparties, highlighting a potential concentration of risk. The high resilience of the groups to the series of natural catastrophes is confirmed by the limited decrease in aggregate eAoL (-2.7%). Without the LTG and transitional measures, the eAoL would decrease by 15.1% compared to the baseline.
  11. Overall, the stress test exercise confirms the significant sensitivity of the European insurance sector to market shocks. The groups appear vulnerable not only to low yields and longevity risk, but also to a sudden and abrupt reversal of risk premia combined with an instantaneous shock to lapse rates and claims inflation. The exercise further reveals potential transmission channels of the tested shocks to insurers’ balance sheets. For instance, in the YCU scenario the assumed inflation shock leads, through claims inflation, to a net increase in the liabilities of those groups more exposed to non-life business. Finally, both the YCD and YCU scenarios have a similar negative impact on post-stress SCR ratios.
  12. Further analysis of the results will be undertaken by EIOPA and by the National Competent Authorities (NCAs) to obtain a deeper understanding of the risks and vulnerabilities of the sector. Subsequently, EIOPA will issue recommendations on relevant aspects where appropriate. The responses to the cyber risk questionnaire, which are not part of this report, will be evaluated and discussed in future EIOPA publications.
  13. This exercise marks an important step in the reassessment of capital requirements under adverse scenarios and provides a valuable basis for continuous dialogue between group supervisors and the participating groups on the identified vulnerabilities. EIOPA plans further work on refining its stress test methodology in order to fully capture the complexity of reassessing capital requirements under adverse scenarios. EIOPA expects participants to use the acquired experience to strengthen their ability to produce high-quality data and to enhance their corresponding risk management capabilities. NCAs are expected to oversee and promote these improvements.
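
The ratio arithmetic in the summary can be cross-checked from the definitions the report relies on: the AoL ratio is assets divided by liabilities, eAoL is assets minus liabilities, and the SCR ratio is eligible own funds divided by the SCR. A back-of-envelope sketch for the YCU scenario, with liabilities normalised to 1.0; the implied asset and liability moves are our inference from the published ratios, not figures stated in the report:

```python
# Cross-checking the reported YCU figures from the stated definitions:
# AoL ratio = assets / liabilities, eAoL = assets - liabilities,
# SCR ratio = eligible own funds / SCR. Liabilities normalised to 1.0.

liabilities_0 = 1.0
aol_0 = 1.095                      # baseline AoL ratio (109.5%)
assets_0 = aol_0 * liabilities_0
eaol_0 = assets_0 - liabilities_0  # excess of assets over liabilities

# YCU: eAoL falls by 32.2% while the AoL ratio falls to 107.6%
eaol_1 = eaol_0 * (1 - 0.322)
liabilities_1 = eaol_1 / (1.076 - 1)  # back out the implied post-stress liabilities
assets_1 = 1.076 * liabilities_1

print(f"implied liability change: {liabilities_1 / liabilities_0 - 1:+.1%}")  # roughly -15%
print(f"implied asset change:     {assets_1 / assets_0 - 1:+.1%}")            # roughly -17%

# SCR ratio: baseline 202.4%, with EOF falling 29.9% and the SCR falling 2.3%
scr_ratio_1 = 2.024 * (1 - 0.299) / (1 - 0.023)
print(f"post-stress SCR ratio:    {scr_ratio_1:.1%}")  # ~145.2%, matching the report
```

The implied moves are consistent with the reported shocks to bond and equity holdings and the 17.0% fall in technical provisions, bearing in mind that total liabilities include items beyond TP.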

AoL without LTG Transition

SCR with and without LTG Transition

NC Reinsurance

Click here to access the EIOPA 2018 Insurance Stress Test Report

 

The future of compliance – How cognitive computing is transforming the banking industry

Paradigm shift in financial services regulatory compliance

The compliance landscape has changed rapidly and dramatically over the past 15 years, with the volume and complexity of new regulations rising unabated. Financial institutions have strained to keep pace with the onslaught of legislative and regulatory changes that arose in response to improper business practices and criminal activity. Those practices eroded public confidence in global credit and financial markets and in the security of the banking system.

After the financial crisis of 2008, there was a sharp increase in enforcement actions brought by federal and state regulators in a broad range of cases involving financial and securities fraud, economic sanctions violations, money laundering, bribery, corruption, market manipulation, and tax evasion, leading to violations of the Bank Secrecy Act and OFAC sanctions.1 According to Forbes, Inc., aggregate fines paid by the largest global banks from 2008 through August 2014 exceeded USD 250 billion. A February 2016 report issued by Bloomberg revealed that the toll on foreign banks since the 2008 crisis has been colossal: 100,000 jobs lost, USD 63 billion in fines and penalties, and a staggering USD 420 billion loss in market capitalization.

In the wake of these enforcement actions and record-breaking penalties, financial institutions are under pressure to

  • rethink,
  • restructure,
  • and retool

their risk and compliance function to operate in the current environment. With regulators, investors and boards demanding increased global transparency, risk and compliance can no longer be tackled in geographical silos. Transforming the way compliance departments operate to meet the new reality requires an investment in talent and technology.

Spending on talent continues to rise as institutions hire more and more staff to shore up already sizeable compliance teams. At the end of 2014, Citigroup reported a compliance staff of 30,000. Some boards, analysts, and investors question the exploding costs of compliance yet recognize that any effort to reduce staff without demonstrable and measurable improvements in compliance processes and technology would almost certainly be viewed negatively by regulators. Headcount alone cannot solve today’s compliance challenges. One possible solution lies in transformative technology that enables a shift in the focus of compliance staff from information gatherers to information analyzers. In other words, it is time for a paradigm shift in the financial services industry and in the way regulatory compliance departments operate.

Cognitive computing for compliance

Cognitive systems are trained by humans and learn as they ingest and interpret new information. Rather than being explicitly programmed, they learn and reason from their interactions with us and from their experiences with their environment. IBM® Watson® technology represents a new era in computing called cognitive computing, where systems understand the world in a way more similar to humans: through

  • senses,
  • learning
  • and experience.

Watson

  • uses natural language processing to analyze structured and unstructured data and to understand grammar and context,
  • understands complex questions,
  • and proposes answers based on supporting evidence and the quality of the information found.

Cognitive computing is a natural fit for the regulatory compliance space because it can be used to accomplish the significant amount of analysis required to read and interpret regulations. The traditional process of distilling regulations into distinct requirements is a demanding and continuous undertaking. Compliance professionals must read hundreds of regulatory documents and determine which of the thousands of lines of text constitute true requirements. Given the same document to assess, different staff can arrive at different conclusions. In a manual environment, this adds another layer of issues to track while the parties resolve whether the identified text is or is not a requirement.
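
As a deliberately naive illustration of that first step, distilling regulatory text into candidate requirements, a few lines of Python can flag sentences that contain obligation markers. The modal-verb heuristic and the sample text are assumptions for illustration only, not IBM Watson's actual method:

```python
# Naive sketch: flag which sentences of a regulatory text look like binding
# requirements, using an obligation-marker heuristic. Illustrative only.
import re

REQUIREMENT_MARKERS = re.compile(
    r"\b(shall|must|is required to|are required to)\b", re.IGNORECASE
)

def flag_requirements(text):
    """Return the sentences that contain an obligation marker."""
    sentences = re.split(r"(?<=[.;])\s+", text)
    return [s for s in sentences if REQUIREMENT_MARKERS.search(s)]

sample = (
    "This section describes the reporting framework. "
    "Institutions must file suspicious activity reports within 30 days. "
    "Guidance on formats may be issued separately. "
    "A designated compliance officer shall review each filing."
)

for req in flag_requirements(sample):
    print("REQUIREMENT:", req)
```

A cognitive system goes far beyond keyword matching, of course, but even this sketch shows why two reviewers can disagree: "may be issued" is guidance, not an obligation, and the boundary cases multiply across hundreds of documents.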

This work is usually performed on a continuous cycle and under the pressure of deadlines. The end-to-end process of identifying and finalizing the requirements inventory can be demanding and tedious. It is also traditionally encumbered by the heavy use of spreadsheets for tracking of regulations, requirements, internal decisions and statuses. Together, these conditions have the potential to negatively impact the work environment and can result in low morale and high turnover. Only when the human effort can shift from the tedium of manual processes (collect regulations, identify requirements, and track compliance issues through spreadsheets) to an automated solution will end-to-end visibility and transparency be realized. Cognitive computing technology can help an institution realign its approach from outdated information processing techniques to a state-of-the-art solution that enables this transformation.

IBM Watson Regulatory Compliance puts the power of cognitive computing into the hands of compliance professionals, giving them the capabilities needed to leverage data to help them manage risk and compliance requirements, and optimize data for more effective analysis. It is specifically tailored for compliance departments and offers, or in the future may offer, core functionalities that include:

  • Document ingestion
  • Requirements parsing and identification
  • Requirements decisioning and management
  • Categorization of requirements
  • Mapping of controls to requirements
  • Harmonization of risk frameworks
  • Interactive reporting and analytics
  • Automated audit trail
  • Automated requirements catalog
  • Centralized document library

Watson Regulatory Compliance is designed to help organizations use cognitive technology to transform key portions of their regulatory compliance processes that are traditionally performed manually.

IBM Cognitive

These enhancements, enabled by Watson, can potentially help an organization to reallocate resources to more value-added compliance and analytic activities for improved transparency across the compliance function.

A conceptual end-to-end approach for cognitive compliance, from requirements management to categorization, mapping of controls and standards, and analytics and reporting, is presented in the following figure.

IBM Cognitive 2

Click here to access IBM’s White Paper

 

Successful risk management today may start with governance, risk and compliance (GRC)—but it shouldn’t end there

As more and more organizations embrace digital transformation, business risk grows in scope and complexity, and the need to manage it in a more agile, responsive manner becomes increasingly pressing.

GRC in its initial incarnation—a set of tools for managing compliance risk—remains valuable for that specific challenge, but it aligns less precisely with today’s evolving definitions of risk and risk management. The answer is not to abandon GRC, though; rather, it’s to allow it to evolve into an approach that is better suited to today’s multifaceted challenges: integrated risk management. This paper maps out the path from a pre-digital, compliance-driven risk-management strategy to an adaptable, integrated approach that can keep pace with the fast-changing digital world.

STARTING POINT: RECOGNIZING NEW RISKS

GRC emerged early in this century as a way of improving corporate governance and internal controls to address regulatory compliance requirements. Today, however, the need has evolved from better managing compliance risk to better managing overall risk. And the definition and scope of risk itself has evolved as well, with areas such as digital third-party risk coming into play and moving to the forefront. Strategies that drive business success today, such as technology adoption or market expansion, are creating new opportunities—but at the same time, they are introducing more risk. Consider these examples:

DIGITAL TRANSFORMATION

Digital transformation is clearly a strategic priority today; IDC recently forecast spending in this area to reach $1.3 trillion in 2018. Digital transformation creates new opportunities to thrive and compete—but it also creates digital risk. Digital business typically involves fast-moving projects supported by processes that require a multitude of different applications, expanding the points of risk and the stakes for the organization. The key to seizing the opportunities is managing the risk in critical areas:

  • VENDOR AND OTHER THIRD-PARTY RELATIONSHIPS: Looking to move more quickly and nimbly to exploit business opportunities, organizations are increasingly relying on external parties, such as service providers (especially cloud service providers), vendors, contractors and consultants. This increases risk, since organizations don’t have direct control over the risk a third party creates—but they are nevertheless responsible for managing the risk in third-party relationships.
  • COMPLIANCE AND OVERSIGHT: That brings us to the area that originally led to the emergence of GRC: compliance risk. That risk has not gone away; it’s only been joined by other risks, such as those described above. Given the increasing complexity of business and IT today, compliance has grown more complex, increasing the risk associated with it.

The examples described above represent major categories of risk for organizations today, but they are by no means the only risks organizations face. Every organization is a complex ecosystem of people, processes and technology, and risk can be hidden away in many areas.

NEXT LOGICAL STEP: AN INTEGRATED VIEW OF RISK

A HORIZONTALLY INTEGRATED VIEW
As areas of risk within organizations continue to grow beyond just compliance risk, the need to view them as an integrated whole becomes increasingly clear. There are two primary reasons for this.

  • One is that it’s simply unrealistic and operationally unsustainable to manage them separately, using different risk management platforms.
  • The other reason—far more critical than the first—is that most areas of organizational risk today don’t really exist independent of other risks; rather, they cross over into other areas.

For example, if engaging with a cloud service provider presents a security risk, that’s both a digital risk and a third-party risk. And if that risk isn’t addressed, it may result in issues across multiple areas, from business disruption to compliance. Therefore, organizations need to be able to leverage business processes to build an integrated picture of risk that crosses operational functions and fosters a multidisciplinary approach to risk management. Think of this as a horizontally integrated view of risks that needs to be managed.
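
The horizontally integrated view described above can be sketched as a single risk register in which each record carries every domain it touches, so a query by domain surfaces cross-cutting risks instead of siloed ones. The field names and sample risks are illustrative assumptions, not RSA's data model:

```python
# Sketch of a horizontally integrated risk register: one record per risk,
# tagged with every domain it crosses. Field names are illustrative.
from collections import defaultdict

risks = [
    {"id": "R1", "title": "Cloud provider security gap",
     "domains": {"digital", "third-party", "compliance"}},
    {"id": "R2", "title": "Vendor contract lapse",
     "domains": {"third-party"}},
    {"id": "R3", "title": "Unpatched customer portal",
     "domains": {"digital", "compliance"}},
]

by_domain = defaultdict(list)
for risk in risks:
    for domain in risk["domains"]:
        by_domain[domain].append(risk["id"])

# The cloud-provider risk surfaces under the digital, third-party and
# compliance views alike: it is one record, not three separate ones.
for domain in sorted(by_domain):
    print(domain, by_domain[domain])
```

The point of the design is that addressing R1 once updates every view of it, whereas siloed platforms would track three copies that drift apart.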

AND A VERTICALLY INTEGRATED VIEW
A horizontally integrated view is important—but incomplete. The other part of the picture is a vertically integrated view that connects strategic and operational risk. In the early days of GRC, independent functions were focused more on operational risks with less emphasis on connecting to the strategic business impact. Business and IT were essentially separate functional parts of an organization and there was little connection between these two worlds. That changed as enterprise GRC became a requirement of risk management.

Today, however, when business and technology are intimately connected (or at the very least, mutually influential), risk management must link operational risks to business strategies and vice versa. Security events are a great example. At RSA, we talk about Business-Driven Security™, which puts security-related IT incidents in a business context and makes it possible to calculate the business impact of a security event—and vice versa. This kind of interrelationship allows organizations to bridge the gap between security teams and their business counterparts, creating an environment in which they can reduce the risk that security incidents will negatively affect the business or that business decisions will negatively affect IT. The interrelationships between strategic business goals and operational events are becoming increasingly impactful.

  • A decision made at the strategic level will cascade down and affect the organization’s ability to manage a risk in operations;
  • a seemingly minor operational event can spiral out of control and impact strategic direction.

Thus, connecting the top-to-bottom, strategic-to-operational view of risk—as illustrated in the accompanying graphic—is essential to truly understanding, and addressing, the obstacles to achieving business objectives.

GRC

Click here to access RSA’s White Paper

Overcome Digital Transformation Distress

Digital has become one of the most overloaded words in the English language, meaning very different things in different contexts. Insurers have been digital since the first policy was recorded on magnetic drum or tape in the 1960s. Oddly enough, insurers now lag considerably behind other industries in their digital maturity and stage of adoption.

Why insurers lag in digital strategy

Many factors help explain why insurers trail in developing and implementing a modern digital strategy. At this point, most insurers have developed a digital footprint and deliver varying levels of engagement with their customers and partners, including some direct access to policy information and service. The transactional nature of some Personal and Commercial (P&C) lines makes this process more straightforward. However, for Life, Accident and Health (LA&H) carriers, especially those providing Group Employee Benefits, it is a more complex problem, with additional parties involved and customization of product and service at the plan level, requiring more detailed policy information and more flexible service options. Combined with the legacy technology platforms most carriers still employ, this makes direct self-service options more difficult to implement, requiring more manual intervention, which ultimately erodes customer satisfaction. Ironically, the key stumbling block to implementing a next-generation digital strategy is insurers’ digital legacy.

Digital Transformation Distress

According to McKinsey, Insurtechs are focusing more on P&C than LA&H but there is significant activity in distribution and new business-related activities, which falls squarely in the digital arena. In a recent multi-country study by Couchbase across multiple industries including insurance,

  • 64% of respondents say if they can’t keep up with digital innovation they will go out of business or be absorbed;
  • 95% say digital transformation seems an insurmountable task and
  • 83% felt they would face being fired if such a project failed.

Despite the challenges, LA&H insurers are putting more comprehensive digital strategies into place and technology vendors servicing this market must think beyond providing basic digital engagement capabilities to supporting a more complete vision of digitally-enabled business.

Digital Engagement and Flexibility

Leading SaaS core insurance system providers believe insurance business leaders need a platform that can provide a level of digital engagement and service equal to their customers’ expectations for all service providers. To enable this, there must be an underlying OpenCore system that can ensure accurate, open and flexible product development, deployment and service to serve a rapidly changing market.

  • Digital Engagement is a critical element of a complete strategy and the most visible. In the Group and Employee Benefits market, there are multiple stakeholders in the value chain with differing roles and digital engagement should be role-based, whether it is transactional or purely informational.
  • Flexibility is required within the business model. The chain of carrier(s), brokers, benefit administration companies (ben admin), enrollment vendors, employers, and employees must provide rapid and accurate straight through processing and be flexible enough to change out any given player in the chain, based on the deal.

Legacy systems are proving inadequate

The traditional approach to support these two key needs of the value chain is

  • either to provide an end-to-end portal solution driven from the core system architecture
  • or a standardized data feed interface between the core system and the next link in the value chain.

The problem with these two approaches is that they are inadequate. Why?

The first approach, an end-to-end portal solution, is not feasible given current and future insurance market directions around multi-carrier plans and value-added services from benefit admin providers. The standardized data feed interface can work but invariably leads to a great deal of custom IT interface work, even when employing industry standards like the emerging LIMRA-backed Workplace Benefits standard. This proves especially difficult when broad systems of engagement from companies like Salesforce.com, used in call centers and broad community portals, are in play.

An Engagement Model that Works

Leading insurance technology vendors are proving that OpenCore is the best approach, applying role-based scenarios to define the digital engagement requirements of the core system. This tactic provides a layered architecture to suit those roles and the engagement path needed for each customer. For example, a large carrier that uses a system of engagement for its customer service reps (CSRs) and works with a broker, enrollment vendor and larger employer might operate in the following scenarios:

  • The insurance specialist who installs and manages the details of a case works directly with the core system interface, designed for experts.
  • The CSR, who works for the carrier and answers basic questions about the case for the employer or employee, interacts with the system of engagement, which is tied to the core system in real time via an app written by the core system vendor specifically for that platform.
  • The broker who does case and member inquiries and updates through a broker portal provided by the carrier with role-based access into the core system.
  • The enrollment vendor uses industry-standard real-time APIs and batch file transfers to exchange data directly with the carrier’s core system.
  • The larger employer exchanges transactions through an API or data feed to the HCM system and has direct access to the carrier’s core system through a role-based portal designed for the exchange process.
  • The employee has access to the employer’s Human Capital Management (HCM) employee portal and the option to go directly to the carrier for deeper interactions such as claims, absences, or portability issues. The interaction with the carrier is via portal, mobile, voice or SMS, depending on the employee’s preference or circumstance.
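
The role-based access underlying these scenarios can be sketched as a simple permission map; the role names follow the text, while the permission sets themselves are illustrative assumptions, not any vendor's actual entitlement model:

```python
# Minimal sketch of role-based access into a core insurance system.
# Roles follow the scenarios in the text; permissions are made up.

ROLE_PERMISSIONS = {
    "insurance_specialist": {"case.install", "case.manage", "member.read", "member.update"},
    "csr":                  {"case.read", "member.read"},
    "broker":               {"case.read", "member.read", "member.update"},
    "enrollment_vendor":    {"member.read", "member.update"},  # via APIs / batch files
    "employee":             {"self.read", "self.claims"},      # portal, mobile, voice, SMS
}

def can(role, permission):
    """True if the given role may perform the given core-system action."""
    return permission in ROLE_PERMISSIONS.get(role, set())

print(can("broker", "member.update"))  # brokers can update member records
print(can("csr", "member.update"))     # CSRs answer questions, they don't edit
```

Layering engagement applications on top of such role-scoped APIs is what lets the same core system safely serve specialists, CSRs, brokers, vendors and employees at once.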

Insurance technology companies that provide a layered digital engagement architecture, with core system capabilities supporting role-based API sets available both to digital engagement applications and to customer and partner DIY projects, enable insurers to achieve the most flexible, stable and modern digital experience.

OpenCore

Click here to access Fineos’ White Paper

EIOPA: Peer review assessing how National Competent Authorities (NCAs) supervise and determine whether an insurer’s setting of key functions fulfils the legal requirements of Solvency II

The main task of the European Insurance and Occupational Pensions Authority (EIOPA) is to

  • enhance supervisory convergence,
  • strengthen consumer protection
  • and preserve financial stability.

In the context of enhancing supervisory convergence and in accordance with its mandate, EIOPA regularly conducts peer reviews, working closely with national competent authorities (NCAs), with the aim of strengthening both the convergence of supervisory practices across Europe and the capacity of NCAs to conduct high-quality and effective supervision.

In line with its mandate, the outcomes of peer reviews, including identified best practices, are to be made public with the agreement of the NCAs that have been subject to the review.

BACKGROUND AND OBJECTIVES

Enhancing the governance system of insurers is one of the major goals of Solvency II (SII). The four key functions (risk management, actuarial, compliance and internal audit) required under the SII regulation are an essential part of the system of governance. These key functions are expected to be operationally independent, to ensure an effective and robust internal control environment within an insurer and to support high-quality decision-making by management. At the same time, it is important that these governance requirements are not overly burdensome for small and medium-sized insurers. Therefore SII allows NCAs to apply the principle of proportionality in relation to compliance with key function holder requirements for those insurers.

Under SII, insurers may combine key functions in one holder. However, such combinations have to be justified by the principle of proportionality, and insurers need to properly address the underlying conflicts of interest. Because of their controlling objective, key functions should generally not be combined with administrative, management or supervisory body (AMSB) membership or with operational tasks. Such combinations should therefore occur only in exceptional cases, taking into account a risk-based approach and the manner in which the insurer avoids and manages any potential conflict of interest.

This peer review assesses how NCAs supervise and determine whether an insurer’s setting of key functions fulfils the legal requirements of SII with a particular emphasis on proportionality. The peer review examines practices regarding:

  • combining key functions under one holder;
  • combining key functions with AMSB membership or with carrying out operational tasks;
  • subordination of one key function under another key function;
  • split of one key function among several holders;
  • assessment of the fitness of key function holders; and
  • outsourcing of key functions.

The period examined under this peer review was 2016, although supervisory practices carried out before 2016, in the preparatory stage of SII, were also covered. The peer review was conducted among NCAs from the European Economic Area (EEA) on the basis of EIOPA’s Methodology for conducting Peer Reviews (Methodology).

Detailed information was gathered in the course of the review. All NCAs completed an initial questionnaire. This was followed by fieldwork comprising visits to eight NCAs and 30 conference calls.

MAIN FINDINGS

The review showed that NCAs in general apply the principle of proportionality and that they have adopted similar approaches.

SUMMARY RESULTS OF THE COMPARATIVE ANALYSIS

  • Supervisory framework: Approximately half of NCAs use written supervisory guidance for the application of the principle of proportionality. Larger NCAs in particular use written supervisory guidance in order to ensure consistency of their supervisory practice among their supervisory staff.
  • Approach of NCAs: Most NCAs have a similar approach. NCAs assess the insurers’ choice of key function holders at the time of initial notification regarding the key function holder’s appointment. If any concerns are noted at this stage, for example regarding combinations or fitness, NCAs generally challenge and discuss these issues with the insurer, rather than issuing formal administrative decisions.
  • Combining key functions in one holder: This occurs in almost all countries. The most frequent combinations are between risk management and actuarial functions and between risk management and compliance functions. Combinations are most commonly used by smaller insurers but are also seen in large insurers. EIOPA has identified a need to draw NCAs’ attention to challenging such combinations more strongly, especially when they occur in larger, more complex insurers, and to ensuring that adequate mitigation measures are in place to warrant a robust system of governance.
  • Holding the internal audit function and other key functions: The combination of the internal audit function with other key functions occurs in 15 countries, although the frequency of such combinations is relatively low. Moreover, there were cases of the internal audit function holder also carrying out operational tasks which could lead to conflicts of interest and compromise the operational independence of the internal audit function. It is important to emphasise that the legal exemption of Article 271 of the Commission Delegated Regulation EU (2015/35) does not apply to the combination with operational tasks.
  • Combining a key function holder with AMSB membership: Most NCAs follow a similar and comprehensive approach regarding the combination of key function holder and AMSB member. In this regard, NCAs accept such cases only if deemed justified under the principle of proportionality. This peer review shows that two NCAs request or support combinations of AMSB member and the risk management function holder regardless of the principle of proportionality in order to strengthen the knowledge and expertise regarding risk management within the AMSB.
  • Combining key function holders (excluding internal audit function holder) with operational tasks: In nearly all countries, combinations of risk management, actuarial and compliance key function holders with operational tasks occur, but such combinations generally occur rarely or occasionally. The most common combinations are the compliance function holder with legal director and the risk management function holder with finance director. However, several NCAs do not have a full market overview of such combinations with operational tasks. Adequate mitigating measures are essential to reduce potential conflicts of interest when key function holders also carry out operational tasks.
  • Splitting a key function between two holders: About half of the NCAs reported cases where more than one individual is responsible for a particular key function (‘split of key function holder’). The most common split concerns the actuarial function (split between life and non-life business). NCAs should monitor such splits in order to maintain appropriate responsibility and accountability among key function holders.
  • Subordination of a key function holder to another key function holder or head of operational department: This is observed in half of the countries reviewed. An organisational subordination can be accepted, but there needs to be a direct ‘unfiltered’ reporting line from the subordinated key function holder to the AMSB. In cases of subordination, conflicts of interest have to be mitigated and operational independence needs to be ensured, including through mitigating measures concerning the remuneration of the subordinated key function holders.
  • Fitness of key function holders: Most NCAs assess the fitness of the key function holder at the time of initial notification and apply the principle of proportionality. Several NCAs did not systematically assess the key function holders appointed before 2016. These NCAs are advised to do so using a risk-based approach.
  • Outsourcing of key function holders: Most NCAs have observed outsourcing of key function holders. According to the proportionality principle, an AMSB member may also be a designated person responsible for overseeing and monitoring the outsourced key function. Eight NCAs make a distinction between intra-group and extra-group outsourcing and six NCAs do not require a designated person in all cases, which may give rise to operational risks.

BEST PRACTICES

Through this peer review, EIOPA identified four best practices.

  • When NCAs adopt a structured, proportionate approach, based on the nature, scale and complexity of the insurer’s business, to their supervisory assessment of key function holders and of combinations of key functions, both at the time of initial notification and on an ongoing basis. The best practice also includes supervisory documentation and consistent and uniform data submission requirements (for example, an electronic data submission system for key function holder notifications). This best practice has been identified in Ireland and the United Kingdom.
  • When an NCA has a supervisory panel set up internally which discusses and advises supervisors about complex issues regarding the application of the proportionality principle in governance requirements regarding key functions. This best practice has been identified in the Netherlands.
  • When assessing the combination of key function holder with AMSB member, EIOPA considers the following as best practice for NCAs:
    • To publicly disclose the NCA’s expectation that controlling key functions should generally not be combined with operational functions or, for example, with AMSB membership. Where such cases occur, NCAs should clearly communicate their expectation that the undertaking is aware of possible conflicts of interest arising from such a combination and manages them effectively.
    • To require from insurers that main responsibilities as a member of the AMSB do not lead to a conflict of interest with the tasks as a key function holder.
    • To assess whether the other AMSB members challenge the key function holder also being an AMSB member.

This best practice has been identified in Lithuania.

  • When NCAs apply a risk-based approach to ongoing supervision that makes it possible to ensure that key function holders fulfil fitness requirements at all times, by holding meetings with key function holders on a regularly scheduled basis as part of an NCA’s work plan (annual review plan). The topics for discussion at those meetings can vary, depending for example on actual events and current topics. This best practice has been identified in Ireland and the United Kingdom.

These best practices provide guidance for a more systematic approach regarding the application of the principle of proportionality as well as for ensuring consistent and effective supervisory practice within NCAs.

EIOPA’s full report on the peer review is available on the EIOPA website.