SUMMARY OF FINDINGS
OVERVIEW OF BIG TECH’S ACTIVITIES IN QUANTUM
Big tech’s quantum activity is ramping up quickly.
Google, Microsoft, Amazon, IBM, and Intel are all developing their own quantum computing hardware. Big tech companies have been behind several breakthroughs in the space.
In July 2021, Microsoft’s venture arm took part in a $450M round for PsiQuantum—the most well-funded quantum computing startup in the world.
Cloud is a key area of quantum competition for big tech
Google, Microsoft, Amazon, and IBM have all launched quantum computing services on their cloud platforms.
Startups have partnered with big tech companies to offer remote access to a broad range of quantum computers.
Big tech forges ahead with quantum advances. Google, Microsoft, Amazon, IBM, and Intel all have ambitious quantum roadmaps.
Expect rising qubit counts and more frequent demonstrations of commercial applications.
Watch for quantum computing to become a hot geopolitical issue, especially for US-China relations.
Expect big tech companies, including China-based Baidu and Alibaba, to be drawn deeper into political debates.
In the US, government efforts to rein in big tech could be countered by officials nervous about keeping up with countries racing ahead with quantum technology.
Other big tech players could join the fray.
Facebook and Apple have not announced quantum tech initiatives, but both will be monitoring the space and have business lines that could benefit from quantum computing.
THEME #1: GOOGLE IS BUILDING CUTTING-EDGE QUANTUM TECHNOLOGY
Alphabet has a software-focused quantum team called Sandbox that is dedicated to applying quantum technology to near-term enterprise use cases. Sandbox operates mostly in stealth mode; however, recent job postings and past comments from its leadership indicate that its work includes:
Quantum sensors—There are hints that Sandbox is working on a hypersensitive magnetism-based diagnostic imaging platform, possibly an MEG system for reading brain activity, that combines quantum-based sensitivity gains (tens of thousands of times more sensitive than typical approaches) with quantum machine learning that disentangles signal from background noise. This could allow for more precise scans, or for cheaper and more flexible deployments of magnetism-based imaging devices beyond hospital settings, as well as improved access in lower-income countries.
Post-quantum cryptography (PQC)—Quantum computers threaten much of the encryption used on the internet. Post-quantum cryptography will defend against this. Expect Sandbox’s work to be focused on helping enterprises transition to PQC and making Alphabet’s sprawling online services quantum-safe.
Distributed computing—This tech allows computers to coordinate processing power and work together on problems. Sandbox’s work here may focus on integrating near-term quantum computers into distributed computing networks to boost overall capabilities. Another approach would be to use quantum optimization algorithms to manage distributed networks more efficiently.
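One transition pattern relevant to the PQC work described above is hybrid key establishment, in which a classical shared secret and a post-quantum shared secret are combined so the session key stays safe unless both schemes are broken. The sketch below is illustrative only: no real PQC library is invoked, and both key-establishment steps are stubbed out with random bytes.

```python
# Sketch of the "hybrid" key-establishment pattern common in PQC transitions.
# Both KEMs are stubbed with random bytes -- purely illustrative.
import hashlib
import hmac
import os

def classical_kem() -> bytes:
    """Placeholder for a classical exchange such as ECDH (stubbed)."""
    return os.urandom(32)

def post_quantum_kem() -> bytes:
    """Placeholder for a lattice-based KEM such as ML-KEM (stubbed)."""
    return os.urandom(32)

def hybrid_session_key(context: bytes = b"example-protocol-v1") -> bytes:
    # Concatenate both shared secrets and condense them through a keyed
    # hash (a single HKDF-extract-style step): an attacker must recover
    # BOTH inputs to learn the session key.
    combined = classical_kem() + post_quantum_kem()
    return hmac.new(context, combined, hashlib.sha256).digest()

print(len(hybrid_session_key()))  # 32-byte session key
```

The design choice worth noting is that hybridisation makes the migration reversible: if a deployed post-quantum scheme is later found weak, security falls back to the classical component rather than to nothing.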
THEME #2: GOOGLE HAS MADE SCIENTIFIC BREAKTHROUGHS
THEME #3: GOOGLE COULD BENEFIT FROM A QUANTUM AI RIPPLE EFFECT
THEME #1: MICROSOFT IS POSITIONING ITSELF AS AN EARLY QUANTUM CLOUD LEADER
THEME #2: MICROSOFT WANTS ITS OWN QUANTUM HARDWARE
THEME #3: MICROSOFT IS A POST-QUANTUM CRYPTOGRAPHY PIONEER
THEME #1: AMAZON SEES QUANTUM COMPUTERS AS KEY TO THE FUTURE OF AWS
THEME #2: AMAZON IS DEVELOPING ITS OWN QUANTUM HARDWARE
THEME #3: AMAZON’S CURRENT BUSINESS LINES COULD BE GIVEN A BIG BOOST BY QUANTUM COMPUTERS
THEME #1: IBM IS GOING AFTER THE FULL QUANTUM COMPUTING STACK
THEME #2: IBM POSITIONS ITSELF AS THE ESSENTIAL QUANTUM COMPUTING PARTNER FOR ENTERPRISES
During the 2020 review of Solvency II, EIOPA identified several divergent practices regarding the valuation of the best estimate, as presented in the background analysis document to EIOPA’s Opinion on the 2020 review of Solvency II. These divergent practices require additional guidance to ensure a convergent application of the existing regulation on best estimate valuation.
In accordance with Article 16 of Regulation (EU) No 1094/2010, EIOPA issues these revised Guidelines to provide guidance on how insurance and reinsurance undertakings should apply the requirements of Directive 2009/138/EC (“Solvency II Directive”) and of Commission Delegated Regulation (EU) 2015/35 (“Delegated Regulation”) on best estimate valuation.
This revision introduces new Guidelines and amends current Guidelines on topics that are relevant for the valuation of the best estimate, including the use of future management actions and expert judgement, the modelling of expenses, the valuation of options and guarantees by economic scenario generators, and the modelling of policyholder behaviour.
EIOPA also identified the need for clarification in the calculation of expected profits in future premiums (EPIFP).
The revised Guidelines apply to individual undertakings and, mutatis mutandis, at the level of the group. These revised Guidelines should be read in conjunction with, and without prejudice to, the Solvency II Directive, the Delegated Regulation and EIOPA’s Guidelines on the valuation of technical provisions. Unless otherwise stated in this document, the current guidelines of EIOPA’s Guidelines on the valuation of technical provisions remain unchanged and continue to be applicable.
Terms not defined in these revised Guidelines have the meaning defined in the Solvency II Directive. These revised Guidelines shall apply from 01-01-2023.
NEW: GUIDELINE 0 – PROPORTIONALITY 3.1. Insurance and reinsurance undertakings should apply the Guidelines on valuation of technical provisions in a manner that is proportionate to the nature, scale and complexity of the risks inherent in their business. This should not result in a material deviation of the value of the technical provisions from the current amount that insurance and reinsurance undertakings would have to pay if they were to transfer their insurance and reinsurance obligations immediately to another insurance or reinsurance undertaking.
NEW: GUIDELINE 24A – MATERIALITY IN ASSUMPTIONS SETTING 3.6. Insurance and reinsurance undertakings should set assumptions and use expert judgment, in particular taking into account the materiality of the impact of the use of assumptions with respect to the following Guidelines on assumption setting and expert judgement. 3.7. Insurance and reinsurance undertakings should assess materiality taking into account both quantitative and qualitative indicators and taking into consideration binary events, extreme events, and events that are not present in historical data. Insurance and reinsurance undertakings should overall evaluate the indicators considered.
NEW: GUIDELINE 24B – GOVERNANCE OF ASSUMPTIONS SETTING 3.11. Insurance and reinsurance undertakings should ensure that all assumption setting, and the use of expert judgement in particular, follows a validated and documented process. 3.12. Insurance and reinsurance undertakings should ensure that the assumptions are derived and used consistently over time and across the insurance or reinsurance undertaking and that they are fit for their intended use. 3.13. Insurance and reinsurance undertakings should approve the assumptions at levels of sufficient seniority according to their materiality, with the most material assumptions approved up to and including the administrative, management or supervisory body.
NEW: GUIDELINE 24C – COMMUNICATION AND UNCERTAINTY IN ASSUMPTIONS SETTING 3.14. Insurance and reinsurance undertakings should ensure that the processes around assumptions, and in particular around the use of expert judgement in choosing those assumptions, specifically attempt to mitigate the risk of misunderstanding or miscommunication between all different roles related to such assumptions. 3.15. Insurance and reinsurance undertakings should establish a formal and documented feedback process between the providers and the users of material expert judgement and of the resulting assumptions. 3.16. Insurance and reinsurance undertakings should make transparent the uncertainty of the assumptions as well as the associated variation in final results.
NEW: GUIDELINE 24D – DOCUMENTATION OF ASSUMPTIONS SETTING 3.17. Insurance and reinsurance undertakings should document the assumption setting process and, in particular, the use of expert judgement, in such a manner that the process is transparent. 3.18. Insurance and reinsurance undertakings should include in the documentation the resulting assumptions and their materiality, the experts involved, the intended use and the period of validity.
3.19. Insurance and reinsurance undertakings should include the rationale for the opinion, together with the information basis used, with the level of detail necessary to make transparent both the assumptions and the process and decision-making criteria used for selecting the assumptions and disregarding other alternatives. 3.20. Insurance and reinsurance undertakings should make sure that users of material assumptions receive clear and comprehensive written information about those assumptions.
NEW: GUIDELINE 24E – VALIDATION OF ASSUMPTIONS SETTING 3.21. Insurance and reinsurance undertakings should ensure that the process for choosing assumptions and using expert judgement is validated. 3.22. Insurance and reinsurance undertakings should ensure that the process and the tools for validating the assumptions and in particular the use of expert judgement are documented. 3.23. Insurance and reinsurance undertakings should track the changes of material assumptions in response to new information, and analyse and explain those changes as well as deviations of realisations from material assumptions. 3.24. Insurance and reinsurance undertakings, where feasible and appropriate, should use validation tools such as stress testing or sensitivity testing. 3.25. Insurance and reinsurance undertakings should review the assumptions chosen, relying on independent internal or external expertise. 3.26. Insurance and reinsurance undertakings should detect the occurrence of circumstances under which the assumptions would be considered false.
AMENDED: GUIDELINE 25 – MODELLING BIOMETRIC RISK FACTORS 3.27. Insurance and reinsurance undertakings should consider whether a deterministic or a stochastic approach is proportionate to model the uncertainty of biometric risk factors. 3.28. Insurance and reinsurance undertakings should take into account the duration of the liabilities when assessing whether a method that neglects expected future changes in biometric risk factors is proportionate, in particular in assessing the error introduced in the result by the method. 3.29. Insurance and reinsurance undertakings should ensure, when assessing whether a method that assumes biometric risk factors are independent from any other variable is proportionate, that the specificities of the risk factors are taken into account. For this purpose, the assessment of the level of correlation should be based on historical data and expert judgment.
NEW: GUIDELINE 28A – INVESTMENT MANAGEMENT EXPENSES 3.30. Insurance and reinsurance undertakings should include in the best estimate the administrative and trading expenses associated with the investments needed to service insurance and reinsurance contracts. 3.31. In particular, for products where the terms and conditions of the contract or the applicable regulation require the identification of the investments associated with the product (e.g. most unit-linked and index-linked products, products managed in ring-fenced funds and products to which the matching adjustment is applied), insurance and reinsurance undertakings should consider the expenses associated with those identified investments. 3.32. For other products, insurance and reinsurance undertakings should base the assessment on the characteristics of the contracts. 3.33. As a simplification, insurance and reinsurance undertakings may also consider all investment management expenses. 3.34. Reimbursements of investment management expenses that the fund manager pays to the undertaking should be taken into account as other incoming cash flows. Where these reimbursements are shared with the policyholders or other third parties, the corresponding cash outflows should also be considered.
AMENDED: GUIDELINE 30 – APPORTIONMENT OF EXPENSES 3.41. Insurance and reinsurance undertakings should allocate and project expenses in a realistic and objective manner, and should base the allocation of these expenses on their long-term business strategies, on recent analyses of the operations of the business, on the identification of appropriate expense drivers and on relevant expense apportionment ratios.
3.42. Without prejudice to the proportionality assessment and the first paragraph of this guideline, insurance and reinsurance undertakings should consider using, in order to allocate overhead expenses over time, the simplification outlined in Technical Annex I, when the following conditions are met:
a) the undertaking pursues annually renewable business; b) the renewals are considered to be new business according to the boundaries of the insurance contract; c) the claims occur uniformly during the coverage period.
AMENDED: GUIDELINE 33 – CHANGES IN EXPENSES 3.47. Insurance and reinsurance undertakings should ensure that assumptions with respect to the evolution of expenses over time, including future expenses arising from commitments made on or prior to the valuation date, are appropriate and consider the nature of the expenses involved. Insurance and reinsurance undertakings should make an allowance for inflation that is consistent with the economic assumptions made and with dependency of expenses on other cash flows of the contract.
NEW: GUIDELINE 37A – DYNAMIC POLICYHOLDER BEHAVIOUR 3.53. Insurance and reinsurance undertakings should base their assumptions on the exercise rate of relevant options on statistical and empirical evidence, where it is representative of future conduct, and on expert judgment based on sound rationale and with clear documentation.
3.54. A lack of data for extreme scenarios should not, on its own, be considered a reason to avoid modelling dynamic policyholder behaviour and/or its interaction with future management actions.
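Dynamic policyholder behaviour of this kind is often implemented in practice as a base lapse assumption scaled by a bounded function of the spread between the market rate and the rate credited to the policy. The sketch below illustrates that common industry pattern; nothing in it is prescribed by the Guidelines, and every parameter (sensitivity, floor, cap) is a hypothetical calibration choice.

```python
# Illustrative dynamic lapse model (not prescribed by the Guidelines):
# the base lapse rate is scaled by a bounded multiplier driven by the
# spread between the market rate and the credited rate. The response is
# bidirectional: a positive spread raises lapses (better returns are
# available elsewhere), a negative spread lowers them.

def dynamic_lapse(base_lapse: float, market_rate: float,
                  credited_rate: float,
                  sensitivity: float = 5.0,   # hypothetical calibration
                  floor: float = 0.5, cap: float = 3.0) -> float:
    spread = market_rate - credited_rate
    multiplier = 1.0 + sensitivity * spread
    multiplier = max(floor, min(cap, multiplier))  # bound the reaction
    return min(1.0, base_lapse * multiplier)

# 4% base lapse; the market yields 2% above the credited rate:
print(round(dynamic_lapse(0.04, 0.05, 0.03), 4))  # 0.044
```

The floor and cap encode the judgment that even extreme spreads do not drive lapses to zero or to certainty, which is exactly the kind of assumption the documentation and validation guidelines above ask undertakings to make explicit.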
NEW: GUIDELINE 37B – BIDIRECTIONAL ASSUMPTIONS 3.59. When setting the assumptions on dynamic policyholder behaviour, insurance and reinsurance undertakings should consider that the dependency between the trigger event and the exercise rate of the option is usually bidirectional, i.e. both an increase and a decrease in the exercise rate should be considered, depending on the direction of the trigger event.
NEW: GUIDELINE 37C – OPTION TO PAY ADDITIONAL OR DIFFERENT PREMIUMS 3.60. Insurance and reinsurance undertakings should model all relevant contractual options when projecting the cash flows, including the option to pay additional premiums or to vary the amount of premiums to be paid that fall within contract boundaries.
NEW: GUIDELINE 40A – COMPREHENSIVE MANAGEMENT PLAN 3.61. Insurance and reinsurance undertakings should ensure that the comprehensive future management actions plan that is approved by the administrative, management or supervisory body is either:
a single document listing all assumptions relating to future management actions used in the best estimate calculation; or
a set of documents, accompanied by an inventory, that clearly provide a complete view of all assumptions relating to future management actions used in best estimate calculation.
NEW: GUIDELINE 40B – CONSIDERATION OF NEW BUSINESS IN SETTING FUTURE MANAGEMENT ACTIONS 3.64. Insurance and reinsurance undertakings should consider the effect of new business in setting future management actions and duly consider the consequences on other related assumptions. In particular, the fact that the set of cash-flows to be projected through the application of Article 18 of the Delegated Regulation on contract boundaries is limited should not lead insurance and reinsurance undertakings to consider that assumptions only rely on this projected set of cash-flows without any influence of new business. This is particularly the case for assumptions on the allocation of risky assets, management of the duration gap or application of profit sharing mechanisms.
NEW: GUIDELINE 53A – USE OF STOCHASTIC VALUATION 3.70. Insurance and reinsurance undertakings should use stochastic modelling for the valuation of technical provisions of contracts whose cash flows depend on future events and developments, in particular those with material options and guarantees. 3.71. When assessing whether stochastic modelling is needed to adequately capture the value of options and guarantees, insurance and reinsurance undertakings should, in particular but not only, consider the following cases:
any kind of profit-sharing mechanism where the future benefits depend on the return of the assets;
financial guarantees (e.g. technical rates, even without profit sharing mechanism), in particular, but not only, where combined with options (e.g. surrender options) whose dynamic modelling would increase the present value of cash flows in some scenarios.
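The reason stochastic modelling is needed for such options and guarantees can be shown with a toy comparison: a maturity guarantee valued along a single mean path appears worthless, while its time value emerges only when averaging over simulated scenarios in which the guarantee bites. All contract terms, rates and volatilities below are invented for illustration.

```python
# Deterministic vs. stochastic valuation of a simple maturity guarantee:
# the contract pays max(fund value, guaranteed amount) at maturity.
# Hypothetical parameters: lognormal fund, flat 2% rate, 15% volatility.
import math
import random

random.seed(0)
premium, guarantee = 100.0, 100.0
r, sigma, T = 0.02, 0.15, 10
discount = math.exp(-r * T)

# Deterministic: project the fund along the single mean path.
fund_det = premium * math.exp(r * T)
det_value = discount * max(fund_det, guarantee)

# Stochastic: average the discounted payoff over simulated paths.
n = 100_000
total = 0.0
for _ in range(n):
    z = random.gauss(0.0, 1.0)
    fund = premium * math.exp((r - 0.5 * sigma**2) * T
                              + sigma * math.sqrt(T) * z)
    total += max(fund, guarantee)
stoch_value = discount * total / n

print(round(det_value, 2))   # 100.0 -- the guarantee looks worthless
print(round(stoch_value, 2)) # noticeably higher: the guarantee's time value
```

In the mean path the fund always exceeds the guarantee, so the deterministic value collapses to the premium; the stochastic average adds the put-like value of the guarantee, which is precisely what 3.70 requires to be captured.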
NEW: GUIDELINE 57A – MARKET RISK FACTORS NEEDED TO DELIVER APPROPRIATE RESULTS 3.75. When assessing whether all the relevant risk factors are modelled with respect to the provisions of Articles 22(3) and 34(5) of the Delegated Regulation, insurance and reinsurance undertakings should be able to demonstrate that their modelling adequately reflects the volatility of their assets and that the material sources of volatility are appropriately reflected (e.g. spreads and default risk). 3.76. In particular, insurance and reinsurance undertakings should use models that allow for the modelling of negative interest rates.
AMENDED: GUIDELINE 77 – ASSUMPTIONS USED TO CALCULATE EPIFP 3.78. For the purpose of calculating the technical provisions without risk margin under the assumption that the premiums relating to existing insurance and reinsurance contracts that are expected to be received in the future are not received, insurance and reinsurance undertakings should apply the same actuarial method used to calculate the technical provisions without risk margin in accordance with Article 77 of the Solvency II Directive, with the following changed assumptions:
a) policies should be treated as though they continue to be in force rather than being considered as surrendered; b) regardless of the legal or contractual terms applicable to the contract, the calculation should not include penalties, reductions or any other type of adjustment to the theoretical actuarial valuation of technical provisions without a risk margin calculated as though the policy continued to be in force.
3.79. All the other assumptions (e.g. mortality, lapses or expenses) should remain unchanged. This means that insurance and reinsurance undertakings should apply the same projection horizon, future management actions and policyholder option exercise rates used in the best estimate calculation, without adjusting them to consider that future premiums will not be received. Even though all expense assumptions should remain unchanged, the level of some expenses (e.g. acquisition expenses or investment management expenses) could be indirectly affected.
NEW: GUIDELINE 77A – ALTERNATIVE APPROACH TO CALCULATE EPIFP 3.88. Insurance and reinsurance undertakings may identify EPIFP as the part of the present value of future profits related to future premiums, provided the outcome does not materially deviate from the value that would result from the valuation described in Guideline 77. This approach may be implemented using a formula-based design.
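The mechanics of Guideline 77 can be illustrated with a toy projection: the best estimate is recomputed with future premiums set to zero while every other cash flow and assumption stays fixed, and EPIFP is the resulting increase in the technical provisions without risk margin. The cash flows and discount rate below are invented purely for illustration.

```python
# Toy EPIFP calculation: recompute the best estimate assuming future
# premiums are not received, all other assumptions unchanged.
# Cash flows and the flat 2% discount rate are hypothetical.

def best_estimate(premiums, claims_and_expenses, rate=0.02):
    """PV of outflows minus PV of premium inflows, flat discounting."""
    pv = 0.0
    for t, (prem, outgo) in enumerate(zip(premiums, claims_and_expenses), 1):
        pv += (outgo - prem) / (1 + rate) ** t
    return pv

future_premiums = [50.0, 50.0, 50.0]
outflows = [40.0, 45.0, 55.0]  # claims and expenses per year

be_with = best_estimate(future_premiums, outflows)
be_without = best_estimate([0.0] * len(outflows), outflows)  # premiums not received
epifp = max(0.0, be_without - be_with)
print(round(epifp, 2))  # 144.19 -- here simply the PV of the premiums foregone
```

In this deliberately simple example no assumption reacts to the missing premiums, so EPIFP reduces to the present value of those premiums; in a realistic model, the interactions that 3.79 insists must not be re-tuned (lapses, management actions, expense levels) make the two valuations diverge by less than that.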
EIOPA’S DIGITAL TRANSFORMATION STRATEGIC PRIORITIES AND OBJECTIVES
EIOPA’s supervisory and regulatory activities are always underpinned by two overarching objectives: promoting consumer protection and financial stability. The digital transformation strategy aims to identify the areas where, in light of these overarching objectives, EIOPA can best commit its resources to addressing the challenges posed by digitalisation, while at the same time seeking to identify and remove undue barriers that limit its benefits.
This strategy sits alongside EIOPA’s other forward-looking prioritisation tools – the union-wide strategic supervisory priorities, the Strategy on Cyber Underwriting and the SupTech Strategy – but its focus is less on the specific actions needed in different areas, and more on how EIOPA will support NCAs and the pensions and insurance sectors in facing the digital transformation through a secure approach to financial innovation and digitalisation.
Five key long-term priorities have been identified, which will guide EIOPA’s contributions on digitalisation topics:
Leveraging on the development of a sound European data ecosystem
Preparing for an increase of Artificial Intelligence while focusing on financial inclusion
Ensuring a forward looking approach to financial stability and resilience
Realising the benefits of the European single market
Enhancing the supervisory capabilities of EIOPA and NCAs.
These five long-term priorities are described in the following sections. Each relates to areas where work is already underway or planned, whether at national or European level, by EIOPA or other European bodies.
The aim is to focus on priority areas where EIOPA can add value so as to enhance synergies and improve overall convergence and efficiency in our response as a supervisory community to the digital transformation.
LEVERAGING ON THE DEVELOPMENT OF A SOUND EUROPEAN DATA ECOSYSTEM
ACCOMPANYING THE DEVELOPMENT OF AN OPEN FINANCE AND OPEN INSURANCE FRAMEWORK
Market trends show that the exchange of both personal and non-personal data through Application Programming Interfaces (APIs) is a leading driver of transformation and integration in the financial sector. By enabling several stakeholders to “plug in” to an API and access timely, standardised data, insurance undertakings, in collaboration with other service providers, can assess the needs of consumers in a timely and adequate manner and develop innovative, convenient proposals for them. Indeed, multiple types of use cases can be developed as a result of enhanced access to and sharing of data in insurance.
Examples of potential use cases include pension tracking systems (see further below), public and private comparison websites, or different forms of embedding insurance (including micro insurances) in the channels of other actors (retailers, airlines, car sharing applications, etc.).
Another use case could consist in allowing consumers to conveniently access information about their insurance products from different providers in an integrated platform / application and identify any protection gaps (or overlaps) in coverage that they may have.
In addition to giving consumers access to a greater variety of products and services and enabling them to make more informed decisions, the seamless transfer of insurance-related data from one provider to another in real time (data portability) could facilitate switching and enhance competition in the market.
Supervisory authorities could also potentially connect into the relevant APIs to access anonymised market data so as to develop more pre-emptive and evidence-based supervision and regulation.
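The aggregation use case described above reduces to pulling standardised policy records from each provider’s API and grouping them by coverage type to surface overlaps or gaps. The sketch below assumes a hypothetical standardised JSON payload; no real insurer API is referenced, and provider responses are stubbed as in-memory dicts where a real integration would make HTTP calls.

```python
# Sketch of policy aggregation across providers (hypothetical payloads):
# group policies by coverage type and flag types held with more than
# one insurer, i.e. potential coverage overlaps.
from collections import defaultdict

provider_responses = [  # stubbed API payloads, one per insurer
    {"provider": "InsurerA", "policies": [
        {"type": "home", "coverage": 200_000},
        {"type": "travel", "coverage": 50_000}]},
    {"provider": "InsurerB", "policies": [
        {"type": "travel", "coverage": 30_000}]},
]

def aggregate(responses):
    """Map each coverage type to the list of providers offering it."""
    by_type = defaultdict(list)
    for resp in responses:
        for policy in resp["policies"]:
            by_type[policy["type"]].append(resp["provider"])
    return by_type

overlaps = {t: ps for t, ps in aggregate(provider_responses).items()
            if len(ps) > 1}
print(overlaps)  # {'travel': ['InsurerA', 'InsurerB']}
```

Coverage gaps would be detected symmetrically, by comparing the aggregated types against a reference list of coverages relevant to the consumer’s situation.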
However, it is also important to take into account relevant risks, such as those linked to data, ICT/cyber risks and financial inclusion risks, as well as issues related to a level playing field and data reciprocity.
EIOPA considers that, if the risks are handled well, several open insurance use cases can bring significant benefits for consumers, for the sector and for its supervision. EIOPA will use the findings of its recent public consultation on this topic to collaborate with the European Commission on the development of the financial data space and/or open finance initiatives foreseen, respectively, in the Commission’s Data Strategy and Digital Finance Strategy, possibly focusing on specific use cases.
ADVISING ON THE DEVELOPMENT OF PENSIONS DATA TRACKING SYSTEMS IN THE EU
European public pension systems are facing the dual challenge of remaining financially sustainable in an ageing society and being able to provide Europeans with an adequate income in retirement. Hence, the relevance of supplementary occupational and personal pension systems is increasing. The latter are also seeing a major trend, influenced by the low interest rate environment, consisting of a shift from Defined Benefit (DB) plans, which guarantee citizens a certain income after retirement, to Defined Contribution (DC) plans, where retirement income depends on how the accumulated contributions have been invested. As a consequence of these developments, more responsibility and more financial risk are placed on individual citizens when planning for their income after retirement.
In this context, Pensions Tracking Systems (PTS) can provide simple and understandable information to the average citizen about his or her pension savings in an aggregated manner, typically conveniently accessible via digital channels. PTS are linked to the concept of Open Finance: different providers of statutory and private pensions share pension data in a standardised manner so that it can be aggregated, providing consumers with relevant information for making informed decisions about their retirement planning.
EIOPA considers it increasingly important to provide consumers with adequate information to make informed decisions about their retirement planning, as reflected in EIOPA’s technical advice to the European Commission on best practices for the development of Pension Tracking Systems. EIOPA remains ready to provide further assistance in this area, as relevant.
TRANSITIONING TOWARDS A SUSTAINABLE ECONOMY WITH THE HELP OF DATA AND TECHNOLOGY
Technologies such as the Internet of Things can assist European insurance undertakings and pension schemes in implementing more sustainable business models and investments.
For example, greater insights provided by new datasets (e.g. satellite images or images taken by drones), combined with more granular AI systems, may make it possible to better assess climate change-related risks and provide advanced insurance coverage. Indeed, as highlighted by the Commission’s strategy on adaptation to climate change, actions aimed at adapting to climate change should be informed by more and better data on climate-related risks and losses, accessible to everyone, as well as by relevant risk assessment tools.
This would allow insurance undertakings to contribute to wider inclusion by incentivising customers to mitigate risks through policies whose pricing and contractual terms are based on effective measurements, e.g. with the use of telematics-based solutions in home insurance. However, there are also concerns about the impact on the affordability and availability of insurance for certain consumers (e.g. consumers living in areas highly exposed to flooding), as well as about the environmental impact of some technologies, notably the energy consumption of certain data centres and crypto-assets.
Promoting a sustainable economy is a core priority for EIOPA. For this purpose, EIOPA will specifically develop a Sustainable Finance Action Plan highlighting, among other things, the importance of improving the accessibility and availability of data and models on climate-related risks and insured losses and the role that EIOPA can play therein, as highlighted by the Commission’s strategy on adaptation to climate change and in line with the Green deal data space foreseen in the Commission’s Data Strategy.
PREPARING FOR AN INCREASE OF ARTIFICIAL INTELLIGENCE WHILE FOCUSING ON FINANCIAL INCLUSION
TOWARDS AN ETHICAL AND TRUSTWORTHY ARTIFICIAL INTELLIGENCE IN THE EUROPEAN INSURANCE SECTOR
The take-up of AI in all areas of the insurance value chain raises specific opportunities and challenges; the variety of use cases is fast moving, while the technical, ethical and supervisory issues involved in ensuring appropriate governance, oversight and transparency are wide-ranging. Indeed, while the benefits of AI in terms of prediction accuracy, cost efficiency and automation are very relevant, the challenges raised by the limited explainability of some AI systems, and by the potential impact of some AI use cases on the fair treatment of consumers and on the financial inclusion of vulnerable consumers and protected classes, are also significant.
A coordinated and coherent approach across markets, insurance undertakings and intermediaries, and between supervisors, is therefore of particular importance, also given the potential costs of addressing divergences in the future. EIOPA acknowledges that AI can play a pivotal role in the digital transformation of the insurance and pension markets in the years to come, and therefore acknowledges the importance of establishing adequate governance frameworks to ensure ethical and trustworthy AI systems. EIOPA will seek to leverage the AI governance principles recently developed by its consultative expert group on digital ethics to develop further sectoral work on specific AI use cases in insurance.
PROMOTING FINANCIAL INCLUSION IN THE DIGITAL AGE
On the one hand, new technologies and business models could be used to improve the financial inclusion of European citizens. For example, young drivers using telematics devices installed in their cars or diabetes patients using health wearables reportedly have access to more affordable insurance products. In addition to the incentives arising from advanced risk-based pricing, insurance undertakings could provide consumers with loss prevention / risk mitigation services (e.g. suggestions to drive safely or to adopt healthier lifestyles) to help them understand and mitigate their risk exposure.
From a different perspective, digital communication channels, new identity solutions and onboarding options could also facilitate access to insurance for certain customer segments. On the other hand, certain categories of consumers, or consumers unwilling to share personal data, could encounter difficulties in accessing affordable insurance as a result of more granular risk assessments. This would, for instance, be the case for consumers having difficulties accessing affordable flood insurance as a result of detailed risk-based pricing enabled by satellite imagery processed by AI systems. In addition, other groups of potentially vulnerable consumers deserve special attention due to their personal characteristics (e.g. elderly people or people in poverty), life events (e.g. a car accident), health conditions (e.g. undergoing therapy) or difficulties in accessing digital services.
Furthermore, the trend towards increasingly data-driven business models can be compromised if adequate governance measures are not put in place to deal with biases in the datasets used, so as to avoid discriminatory outcomes.
EIOPA will assess the topic of financial inclusion from a broader perspective, i.e. not only from a digitalisation angle, seeking to promote the fair and ethical treatment of consumers, in particular in front-desk applications and in insurance lines of business that are particularly important due to their social impact.
EIOPA will routinely assess its consumer protection supervisory and policy work in view of its impacts on financial inclusion, and will ensure that its work on digitalisation takes into account accessibility and inclusion impacts.
ENSURING A FORWARD LOOKING APPROACH TO FINANCIAL STABILITY AND RESILIENCE
ENSURING A RESILIENT AND SECURE DIGITALISATION
Similar to other sectors of the economy, incumbent undertakings as well as InsurTech start-ups increasingly rely on information and communication technology (ICT) systems in the provision of insurance and pension services. Among other benefits, the increasing adoption of innovative ICT allows undertakings to implement more efficient processes and reduce operational costs, enables data tracking and data backups in case of incidents, and supports greater accessibility and collaboration within the organisation (e.g. via cloud computing systems).
However, undertakings’ operations are also increasingly vulnerable to ICT security incidents, including cyberattacks. Furthermore, the complexity of some ICT systems, and the distinct governance arrangements applied to new technologies (e.g. cloud computing), are increasing, as is the frequency of ICT-related incidents (e.g. cyber incidents), which can have a considerable impact on undertakings’ operational functioning. Moreover, the growing relevance of larger ICT service providers could also lead to concentration and contagion risks. Supervisory authorities need to take these developments into account and adapt their supervisory skills and competences accordingly.
Early on, EIOPA identified cyber security and ICT resilience as a key policy priority and in the years to come will focus on the implementation of those priorities, including the recently adopted cloud computing and ICT guidelines, and on the upcoming implementation of the Digital Operational Resilience Act (DORA).
ASSESSING THE PRUDENTIAL FRAMEWORK IN THE LIGHT OF DIGITALISATION

The Solvency II Directive sets out requirements applicable to insurance and reinsurance undertakings in the EU with the aim of ensuring their financial soundness and providing adequate protection to policyholders and beneficiaries. The Solvency II Directive follows a proportional, risk-based and technology-neutral approach and therefore remains fully relevant in the context of digitalisation. Under this approach, all undertakings, including start-ups that wish to obtain a licence to benefit from Solvency II’s passporting rights to access the Internal Market via digital (and non-digital) distribution channels, need to meet the requirements foreseen in the Directive, including minimum capital requirements.
A prudential evaluation of digital transformation processes should consider that insurance undertakings are incurring high IT-related costs, which need to be appropriately reflected in their balance sheets. Furthermore, Solvency II’s outsourcing requirements and system of governance requirements are also relevant, in light of the increasing collaboration with third-party service providers (including BigTechs) and the use of new technologies such as AI. Investments in novel assets such as crypto-assets, as well as the trend towards the “platformisation” of the economy, are also relevant from a prudential perspective, as is the type of activities developed by insurance undertakings.
EIOPA considers that it is important to assess the prudential framework in light of the digital transformation that is taking place in the sector, seeking to ensure its financial soundness, promote greater supervisory convergence and also assess whether digital activities and related risks are adequately captured and if there are any undue regulatory barriers to digitalisation in this area.
REALISING THE BENEFITS OF THE EUROPEAN SINGLE MARKET

SUPPORTING THE DIGITAL SINGLE MARKET FOR INSURANCE AND PENSION PRODUCTS

Digital distribution can readily cross borders and reduce linguistic and other barriers; economies of scale linked to offering products to a wider market, increased competition, and greater variety of products and services for consumers are some of the benefits arising from the European Internal Market.
However, scaling up the scope and speed of distribution of products and services across the Internal Market is an area where there is still major untapped potential. Indeed, while legislative initiatives such as the
Insurance Distribution Directive (IDD),
Solvency II Directive,
Packaged Retail and Insurance-based Investment Products (PRIIPs) Regulation,
or the Directive on the activities and supervision of institutions for occupational retirement provision (IORP II)16
have made considerable progress towards the convergence of national regimes in Europe, considerable supervisory and regulatory divergences still persist amongst EU Member States.
For example, the IDD is a minimum harmonisation Directive, and existing regulation does not always allow for a fully digital approach. For instance, the need to use non-digital signatures or paper-based requirements, as established by Article 23(1)(a) IDD and Article 14(2)(a) PRIIPs Regulation, can limit end-to-end digital workflows. It is critical that the opportunities – and risks, for instance in relation to financial inclusion and accessibility – that come with digital transformations are fully integrated into future policy work. In this context, the so-called 28th regime used in the Regulation on a pan-European Personal Pension Product (PEPP)17, which does not replace or harmonise national systems but coexists with them, is an approach that could eventually be explored, taking into account the lessons learned.
EIOPA supports the development of the Internal Market in times of transformation, through the recalibration where needed of the IDD, Solvency II, PRIIPs and IORP II from a digital single market perspective. EIOPA will also explore what a digital single market for insurance might look like from a regulatory and supervisory perspective. Furthermore, EIOPA will integrate a digital ‘sense check’ into all of its policy work, where relevant.
SUPPORTING INNOVATION FACILITATORS IN EUROPE

In recent years many NCAs in the EU have adopted initiatives to facilitate financial innovation. These initiatives include the establishment of innovation facilitators such as ‘innovation hubs’ and ‘regulatory sandboxes’ to exchange views and experience concerning FinTech-related regulatory issues, to enable the testing and development of innovative solutions in a controlled environment, and to learn more about supervisory expectations. These initiatives also allow supervisory authorities to gain a better understanding of the new technologies and business models emerging in the market.
At European level, the European Forum for Innovation Facilitators (EFIF), created in 2019, has become an important forum where European supervisors share experiences from their national innovation facilitators and discuss with stakeholders topics such as Artificial Intelligence, Platformisation, RegTech or crypto-assets. The EFIF will soon be complemented with the Commission’s Digital Finance platform; a new digital interface where stakeholders of the digital finance ecosystem will be able to interact.
Innovation facilitators can play a key role in the implementation and adoption of innovative technologies and business models in Europe and EIOPA will continue to support them through its work in the EFIF and the upcoming Digital Finance Platform. EIOPA will work to further facilitate cross-border / cross-sector cooperation and information exchanges on emergent business models.
ADDRESSING THE OPPORTUNITIES AND CHALLENGES OF FRAGMENTED VALUE CHAINS AND THE PLATFORM ECONOMY

New actors including InsurTech start-ups and BigTech companies are entering the insurance market, both as competitors and as cooperation partners of incumbent insurance undertakings.
Concerning the latter, incumbent undertakings reportedly turn increasingly to third-party service providers to gain quick and efficient access to new technologies and business models. For example, based on EIOPA’s Big Data Analytics thematic review, while the majority of the participating insurance undertakings using BDA solutions in the area of claims management developed these tools in-house, two thirds of the undertakings used outsourcing arrangements to implement AI-powered chatbots.
This trend is reinforced by the platformisation of the economy, which in the insurance sector goes beyond traditional comparison websites and is reflected in the development of complex ecosystems integrating different stakeholders. They often share data via Application Programming Interfaces (APIs) and cooperate in the distribution of insurance products via platforms (including those of BigTechs) embedded (bundled) with other financial and non-financial services. In addition, in the broader context of Decentralised Finance (DeFi), Peer-to-Peer (P2P) insurance business models using digital platforms and different levels of decentralisation to interact with members with similar risk profiles have also emerged in several jurisdictions; although their significance in terms of gross written premiums is very limited to date, it is a matter that needs to be monitored.
EIOPA notes the opportunities and challenges arising from increasingly fragmented value chains and the platformisation of the economy, which will be reflected in the ESAs’ upcoming technical advice on digital finance to the European Commission. EIOPA will subsequently support any measures within its remit that may be needed to
encourage innovation and competition,
safeguard financial stability
and ensure a level playing field.
ENHANCING THE SUPERVISORY CAPABILITIES OF EIOPA AND NCAS

LEVERAGING TECHNOLOGY AND DATA FOR MORE EFFICIENT SUPERVISION AND REGULATORY COMPLIANCE

Digital technologies can also help supervisors to implement more agile and efficient supervisory processes (commonly known as SupTech). They can support continuous improvement of internal processes as well as business intelligence capabilities, including enhancing the analytical framework, the development of risk assessments and the publication of statistics. This can also include new capabilities for identifying and assessing conduct risks.
With its European perspective, EIOPA can play a key role by enhancing NCAs’ data analysis capabilities based on extensive and rich datasets and appropriate processing tools.
As outlined in its SupTech strategy and Data and IT strategy, EIOPA has the objective to promote its own transformation to become a digital, user-focused and data driven organisation that meets its strategic objectives effectively and efficiently. Several on-going projects are already in place to achieve this objective.
INCREASING THE UNDERSTANDING OF NEW TECHNOLOGIES BY SUPERVISORS IN CLOSE COOPERATION WITH STAKEHOLDERS

Building supervisory capacity and convergence is a critical enabler of the other benefits of digitalisation; without strong and convergent supervision, those benefits may be compromised. Using the different tools available (innovation hubs, regulatory sandboxes, market monitoring, public consultations, desk-based reports, etc.), supervisors seek to understand, engage with and supervise increasingly technology-driven undertakings.
Close cooperation with stakeholders with hands-on experience in the use of innovative tools has proved to be a useful way to improve supervisors’ knowledge; it is equally important for stakeholders to understand supervisory expectations.
Certainly, the profile of supervisors needs to evolve: they need to extend their knowledge into new areas and understand how new business models and value chains may impact undertakings and intermediaries, both from a conduct and from a prudential perspective. Moreover, in view of the growing importance of new technologies and business models for insurance undertakings and pension schemes, it is important to ensure that supervisors have access to relevant data about these developments in order to enable evidence-based supervision.
EIOPA aims to continue incentivising the sharing of knowledge and experience amongst NCAs by organising InsurTech roundtables, workshops and seminars for supervisors as well as pursuing further potential deep-dive analysis on certain financial innovation topics. EIOPA will also further emphasise an evidence-based supervisory approach by developing a regular collection of harmonised data on digitalisation topics. EIOPA will also develop a stakeholder engagement strategy on digitalisation topics to identify those actors and areas where the cooperation should be reinforced.
International Financial Reporting Standard (IFRS) 17, the first comprehensive global accounting standard for insurance products, is due to be implemented in 2023, and is the latest standard developed by the International Accounting Standards Board (IASB) in its push for international accounting standards.
IFRS 17, following other standards such as IFRS 9 and Current Expected Credit Losses (CECL), is the latest move toward ‘risk-aware accounting’, a framework that aims to incorporate financial and non-financial risk into accounting valuation.
As a principles-based standard, IFRS 17 provides room for different interpretations, meaning that insurers have choices to make about how to comply. The explicit integration of financial and non-financial risk has caused much discussion about the unprecedented and distinctive modeling challenges that IFRS 17 presents. These could cause ‘tunnel vision’ among insurers when it comes to how they approach compliance.
But all stages of IFRS 17 compliance are important, and each raises distinct challenges. By focusing their efforts on any one aspect of the full compliance value chain, insurers risk failing to comply adequately. In the case of IFRS 17, it is not necessarily accidental non-compliance that is at stake, but rather the sub-optimal presentation of the business’ profits.
To achieve ‘ideal’ compliance, firms need to focus on the logistics of reporting as much as on the mechanics of modeling. Effective and efficient reporting comprises two elements: presentation and disclosure. Reporting is the culmination of the entire compliance value chain, and decisions made further up the chain can have a significant impact on the way that value is presented. Good reporting is achieved through a mixture of technology and accounting policy, and firms should follow several strategies in achieving this:
Anticipate how the different IFRS 17 measurement models will affect balance sheet volatility.
Understand the different options for disclosure, and which approach is best for specific institutional needs.
Streamline IFRS 17 reporting with other reporting duties.
Where possible, aim for collaborative report generation while maintaining data integrity.
Explore and implement technology that can service IFRS 17’s technical requirements for financial reporting.
Store and track data on a unified platform.
In this report we focus on the challenges associated with IFRS 17 reporting, and consider solutions to those challenges from the perspectives of accounting policy and technology implementation. And in highlighting the reporting stage of IFRS 17 compliance, we focus specifically on how decisions about the presentation of data can dictate the character of final disclosure.
Introduction: more than modeling
IFRS 17 compliance necessitates repeated stochastic calculations to capture financial and nonfinancial risk (especially in the case of long-term insurance contracts). Insurance firms consistently identify modeling and data management as the challenges they most anticipate having to address in their efforts to comply. Much of the conversation and ‘buzz’ surrounding IFRS 17 has therefore centered on its modeling requirements, and in particular the contractual service margin (CSM) calculation.
But there is always a danger that firms will get lost in the complexity of compliance and forget the aim of IFRS 17. Although complying with IFRS 17 involves multiple disparate process elements and activities, it is still essentially an accounting standard. First and foremost its aim is to ensure the transparent and comparable disclosure of the value of insurance services. So while IFRS 17 calculations are crucial, they are just one stage in the compliance process, and ultimately enable the intended outcome: reporting.
Complying with the modeling requirements of IFRS 17 should not create ‘compliance tunnel vision’ at the expense of the presentation and disclosure of results. Rather, presentation and disclosure are the culmination of the IFRS 17 compliance process flow and are key elements of effective reporting (see Figure 1).
Developing an IFRS 17 accounting policy
A key step in developing reporting compliance is having an accounting policy tailored to a firm’s specific interaction with IFRS 17. Firms have decisions to make about how to comply, together with considerations of the knock-on effects IFRS 17 will have on the presentation of their comprehensive statements of income.
There are a variety of considerations: in some areas IFRS 17 affords a degree of flexibility; in others it does not. Areas that will substantially affect the appearance of firms’ profits are:
• The up-front recognition of loss and the amortization of profit. • The new unit of account. • The separation of investment components from insurance services. • The recognition of interest rate changes under the general measurement model (GMM). • Deferred acquisition costs under the premium allocation approach (PAA).
As a principles-based standard, IFRS 17 affords a degree of flexibility in how firms approach valuation. One of its aims is to ensure that entity-specific risks and diverse contract features are adequately reflected in valuations, while still safeguarding reporting comparability. This flexibility also gives firms some degree of control over the way that value and risk are portrayed in financial statements. However, some IFRS 17 stipulations will lead to inevitable accounting mismatches and balance-sheet volatility.
Accounting policy impacts and choices – Balance sheet volatility
One unintended consequence of IFRS 17 compliance is balance sheet volatility. As an instance of risk-aware accounting, IFRS 17 requires the value of insurance services to be market-adjusted. This adjustment is based on a firm’s projection of future cash flows, informed by calculated financial risk. Moreover, although this will not be the first time firms incorporate non-financial risk into valuations, it is the first time the incorporation has to be explicit.
Market volatility will be reflected in the balance sheet, as liabilities and assets are subject to interest rate fluctuation and other financial risks. The way financial risk is incorporated into the value of a contract can also contribute to balance sheet volatility. The way it is incorporated is dictated by the measurement model used to value it, which depends on the eligibility of the contract.
There are three measurement models, the PAA, the GMM and the variable fee approach (VFA). All three are considered in the next section.
The three measurement models
Features of the three measurement models (see Figure 2) can have significant effects on how profit – represented by the CSM – is presented and ultimately disclosed.
To illustrate the choices around accounting policy that insurance firms will need to consider and make, we provide two specific examples, for the PAA and the GMM.
Accounting policy choices: the PAA
When applying the PAA to shorter contracts – generally those of fewer than 12 months – firms have several choices to make about accounting policy. One is whether to defer acquisition costs. Unlike under previous reporting regimes, indirect costs cannot be deferred as acquisition costs under IFRS 17’s PAA. Firms can either expense directly attributable acquisition costs upfront or defer them and amortize the cost over the length of the contract. Expensing acquisition costs as they are incurred may affect whether a group of contracts is characterized as onerous at inception. Deferring acquisition costs reduces the liability for remaining coverage; however, it may also increase the loss recognized in the income statement for onerous contracts.
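The trade-off can be sketched numerically. This is a toy illustration with hypothetical figures, not a prescribed IFRS 17 calculation: it only contrasts the timing of expense recognition under the two options.

```python
# Illustrative sketch of the PAA acquisition-cost choice (hypothetical figures).
# Option 1: expense acquisition costs as incurred (all in month 1).
# Option 2: defer and amortize straight-line over the coverage period.

def expensed_profile(acq_cost, months):
    """Full cost hits the income statement in month 1."""
    return [acq_cost] + [0.0] * (months - 1)

def deferred_profile(acq_cost, months):
    """Cost amortized evenly over the coverage period."""
    return [acq_cost / months] * months

months, acq_cost = 12, 1200.0
# Total expense is identical; only the timing differs.
assert sum(expensed_profile(acq_cost, months)) == sum(deferred_profile(acq_cost, months))
```

The timing difference is what can tip a group of contracts into being classed as onerous at inception under the expensing option.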
Accounting policy choices: the GMM
Under IFRS 17, revenue is the sum of
the release of CSM,
changes in the risk adjustment,
and expected net cash outflows, excluding any investment components.
Excluding any investment component from revenue recognition will have significant impacts on contracts being sold by life insurers.
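The revenue build-up described above can be expressed as a short sketch. All figures and the function name are hypothetical; the point is that stripping out the investment component reduces recognized revenue for savings-heavy life contracts.

```python
# Minimal sketch of the IFRS 17 revenue components listed above:
# CSM release + change in risk adjustment + expected net cash outflows,
# with any investment component excluded from the outflows.

def insurance_revenue(csm_release, risk_adj_change, expected_outflows,
                      investment_component=0.0):
    return csm_release + risk_adj_change + (expected_outflows - investment_component)

# A contract with a large savings (investment) component contributes
# less revenue, because that component is excluded.
rev = insurance_revenue(csm_release=50.0, risk_adj_change=10.0,
                        expected_outflows=200.0, investment_component=80.0)
```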
Contracts without direct participation features measured under the GMM use a locked-in discount rate – whether this is calculated ‘top down’ or ‘bottom up’ is at the discretion of the firm. Changes to the CSM have to be made using the discount rate set at the initial recognition of the contract. Changes in financial variables that differ from the locked-in discount rate cannot be integrated into the CSM, so appear as insurance service value.
A firm must account for the changes directly in the comprehensive income statement, and this can also contribute to balance sheet volatility.
As part of their accounting policy, firms have a choice about how to recognize changes in discount rates and other changes to financial risk assumptions – between other comprehensive income (OCI) and profit and loss (P&L). Recognizing fluctuations in discount rates and financial risk in the OCI reduces some volatility in P&L. Firms also recognize the fair value of assets in the OCI under IFRS 9.
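A stylized sketch of this disaggregation choice follows. The rates and the simple allocation are assumptions for illustration; the actual IFRS 17 systematic allocation of insurance finance income/expenses is more involved.

```python
# Hedged sketch of the OCI option (hypothetical numbers). Under the OCI
# option, the P&L charge is measured at the locked-in discount rate and
# the difference versus the current-rate measurement goes to OCI,
# damping P&L volatility.

def split_finance_expense(liability, locked_in_rate, current_rate, use_oci):
    total = liability * current_rate            # current-rate finance expense
    if not use_oci:
        return {"P&L": total, "OCI": 0.0}       # everything flows through P&L
    pnl = liability * locked_in_rate            # locked-in portion stays in P&L
    return {"P&L": pnl, "OCI": total - pnl}     # remainder parked in OCI
```

With a 3% locked-in rate and a 5% current rate, the OCI option keeps the P&L charge at the locked-in level and routes the rate movement to OCI.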
The technology perspective
Data integrity and control
At the center of IFRS 17 compliance and reporting is the management of a wide spectrum of data – firms will have to gather and generate data from historic, current and forward-looking perspectives.
Creating IFRS 17 reports will be a non-linear process, and data will be incorporated as it becomes available from multiple sources. For many firms, contending with this level of data granularity and volume will be a big leap from other reporting requirements. The maturity of an insurer’s data infrastructure is partly defined by the regulatory and reporting context it was built in, and in which it operates – entities across the board will have to upgrade their data management technology.
In regions such as Southeast Asia and the Middle East, however, data management on the scale of IFRS 17 is unprecedented. Entities operating in these regions in particular will have to expend considerable effort to upgrade their infrastructure. Manual spreadsheets and complex legacy systems will have to be replaced with data management technology across the compliance value chain.
According to a 2018 survey by Deloitte, 87% of insurers believed that their systems technology required upgrades to capture the new data they have to handle and perform the calculations they require for compliance. Capturing data inputs was cited as the biggest technology challenge.
Tracking and linking the data lifecycle
Compliance with IFRS 17 demands data governance across the entire insurance contract valuation process. The data journey starts at the data source and travels through aggregation and modeling processes all the way to the disclosure stage (see Figure 3).
In this section we focus on the specific areas of data lineage, data tracking and the auditing processes that run along the entire data compliance value chain. For contracts longer than 12 months, the valuation process will be iterative, as data is transformed multiple times by different users. Having a single version of reporting data makes it easier to collaborate, track and manage the iterative process of adapting to IFRS 17. Cloud platforms help to address this challenge, providing an effective means of storing and managing the large volumes of reporting data generated by IFRS 17. The cloud allows highly scalable, flexible technology to be delivered on demand, enabling simultaneous access to the same data for internal teams and external advisors.
It is essential that amendments are tracked and stored as data changes hands and passes through different IFRS 17 ‘compliance stages’. Data lineage processes can systematically track users’ interactions with data, improving the ‘auditability’ of the compliance process and users’ ‘ownership’ of activity.
Data linking is another method of managing IFRS 17 reporting data. Data linking contributes to data integrity while enabling multiple users to make changes to data. It enables the creation of relationships across values while maintaining the integrity of the source value, so changing the source value creates corresponding changes across all linked values. Data linking also enables the automated movement of data from spreadsheets to financial reports, updating data as it is changed and tracking users’ changes to it.
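The data-linking idea can be sketched as follows. The class and method names are illustrative, not a vendor API: a source value carries derived (linked) values that always reflect the current source, and every change to the source is logged for auditability.

```python
# Minimal sketch of data linking with change tracking (illustrative only).

class LinkedValue:
    def __init__(self, name, value):
        self.name, self._value = name, value
        self._links, self.audit_log = [], []

    def link(self, name, transform):
        """Register a derived value computed from the source."""
        self._links.append((name, transform))

    def set(self, value, user):
        """Update the source value; record who changed what."""
        self.audit_log.append((user, self._value, value))
        self._value = value

    def view(self):
        """Linked values are recomputed from the source, so they never drift."""
        return {name: fn(self._value) for name, fn in self._links}

premium = LinkedValue("gross_premium", 1000.0)
premium.link("net_of_tax", lambda v: v * 0.9)
premium.set(2000.0, user="analyst_1")   # the linked value follows automatically
```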
Disclosing the data
IFRS 17 is more than just a compliance exercise: it will have a fundamental impact on how insurance companies report their data internally, to regulators, and to financial markets. For the final stage of compliance, firms will need to adopt a new format for the balance sheet, P&L statement and cash flow statements.
In addition to the standard preparation of financial statements, IFRS 17 will require a number of disclosures, including the explanation of recognized amounts, significant judgements made in applying IFRS 17, and the nature and extent of risks arising from insurance contracts. As part of their conversion to IFRS 17, firms will need to assess how data will have to be managed on a variety of levels, including
internal key performance indicators
and communications to financial markets.
Communication with capital markets will be more complex, because of changes that will have to be made in several areas:
The presentation of financial results.
Explanations of how calculations were made, and around the increased complexity of the calculations.
Footnotes to explain how data is being reported in ‘before’ and ‘after’ conversion scenarios.
During their transition, organizations will have to report and explain to the investor community which changes were the result of business performance and which were the result of a change in accounting basis. The new reporting basis will also impact how data will be reported internally, as well as overall effects on performance management. The current set of key metrics used for performance purposes, including volume, revenue, risk and profitability, will have to be adjusted for the new methodology and accounting basis. This could affect how data will be reported on and reconciled for current regulatory reporting requirements including Solvency II, local solvency standards, and broader statutory and tax reporting.
IFRS 17 will drive significant changes in the current reporting environment. To address this challenge, firms must plan how they will manage both the pre-conversion and post-conversion data sets, the preparation of pre-, post-, and comparative financial statements, and the process of capturing and disclosing all of the narrative that will support and explain these financial results.
In addition, in managing the complexity of the numbers and the narrative before, during and after the conversion, reporting systems will also need to scale to meet the requirements of regulatory reporting – including disclosure in eXtensible Business Reporting Language (XBRL) in some jurisdictions. XBRL is a global reporting markup language that encodes documents in a human- and machine-readable format for business reporting (the IASB publishes its IFRS Taxonomy files in XBRL).
But XBRL tagging can be a complex, time-consuming and repetitive process, and firms should consider using available technology partners to support the tagging and mapping demands of document drafting.
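As a toy illustration of what tagging produces, the snippet below emits a single XBRL-style fact using Python's standard library. The concept name, context and unit references are hypothetical placeholders; real filings use concepts from the IASB's published IFRS Taxonomy and full namespace declarations.

```python
# Toy XBRL-style fact (illustrative; not a schema-valid filing).
import xml.etree.ElementTree as ET

root = ET.Element("xbrl")
# A fact ties a value to a concept, a reporting context and a unit.
fact = ET.SubElement(root, "ifrs:InsuranceRevenue",
                     {"contextRef": "FY2023", "unitRef": "EUR", "decimals": "0"})
fact.text = "180000"
doc = ET.tostring(root, encoding="unicode")
```

Even this stripped-down example shows why tagging at scale is repetitive: every reported number needs a concept, context and unit mapping.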
Businesses that use legacy data sources such as mainframe have invested heavily in building a reliable data platform. At the same time, these enterprises want to move data into the cloud for the latest in analytics, data science and machine learning.
The Importance of Legacy Data
Mainframe is still the processing backbone for many organizations, constantly generating important business data.
It’s crucial to consider the following:
MAINFRAME IS THE ENTERPRISE TRANSACTION ENVIRONMENT
In 2019, there was a 55% increase in transaction volume on mainframe environments. Studies estimate that 2.5 billion transactions are run per day, per legacy system across the world.
LEGACY IS THE FUEL BEHIND CUSTOMER EXPERIENCES
Within industries such as financial services and insurance, most customer information lives on legacy systems. Over 70% of enterprises say their customer-facing applications are completely or very reliant on mainframe processing.
BUSINESS-CRITICAL APPLICATIONS RUN ON LEGACY SYSTEMS
Mainframe often holds business-critical information and applications — from credit card transactions to claims processing. Over half of enterprises with a mainframe run more than half of their business-critical applications on the platform.
However, legacy systems also present a limitation for an organization’s analytics and data science journey. While moving everything to the cloud may not be the answer, identifying ways to start a legacy modernization process is crucial to the next generation of data and AI initiatives.
The Cost of Legacy Data
Across the enterprise, legacy systems such as mainframe serve as a critical piece of infrastructure that is ripe for integration with modern analytics platforms. If a modern analytics platform is only as good as the data fed into it, enterprises must include all data sources to succeed. However, many complexities can occur when organizations look to build data integration pipelines between their modern analytics platform and legacy sources. As a result, plans to connect these two areas are often easier said than done.
DATA SILOS HINDER INNOVATION
Over 60% of IT professionals with legacy and modern technology in house are finding that data silos are negatively affecting their business. As data volumes increase, IT can no longer rely on current data integration approaches to solve their silo challenges.
CLOUDY BUSINESS INSIGHTS
Business demands that more decisions are driven by data. Still, few IT professionals who work with legacy systems feel they are successful in delivering data insights beyond their immediate department. Data-driven insights will be the key to competitive success, and the inability to provide them puts a business at risk.
SKILLS GAP WIDENS
While it may be difficult to find skills for the latest technology, it’s becoming even harder to find skills for legacy platforms. Enterprises have only replaced 37% of the mainframe workforce lost over the past five years. As a result, the knowledge needed to integrate mainframe data into analytics platforms is disappearing. While the drive for building a modern analytics platform is more powerful than ever, taking this initiative and improving data integration practices that encompass all enterprise data has never been more challenging.
The success of building a modern analytics platform hinges on understanding the common challenges of integrating legacy data sources and choosing the right technologies that can scale with the changing needs of your organization.
Challenges Specific to Extracting Mainframe Data
With so much valuable data on mainframe, the most logical step is to connect these legacy data sources to a modern data platform. In practice, however, building integration pipelines to legacy sources is often easier said than done. Shared challenges of extracting mainframe data for integration with modern analytics platforms include the following:
It’s common for legacy data not to be readily compatible with downstream analytics platforms, open-source frameworks and data formats. The varied structures of legacy data sources differ from relational data. Legacy data sources have traits such as header and trailer records and complex data structures (e.g., nested, repeated or redefined elements).
If COBOL REDEFINES clauses and related logic are set up incorrectly at the start of a data integration workflow, legacy data structures risk slowing processing speeds to the point of business disruption and can lead to incorrect data for downstream consumption.
COBOL copybooks can be a massive hurdle to overcome for integrating mainframe data. COBOL copybooks are the metadata blocks that define the physical layout of data but are stored separately from that data. As a result, they can be quite complicated, containing not just formatting information, but also logic in the form, for example, of nested Occurs Depending On clauses. For many mainframe files, hundreds of copybooks may map to a single file. Feeding mainframe data directly into an analytics platform can result in significant data confusion.
Unlike an RDBMS, which requires data to be entered into tables and columns, nothing enforces a set data structure on the mainframe. COBOL copybooks are incredibly flexible: they can group multiple pieces of data into one field, subdivide a field into several fields, or ignore whole sections of a record.
As a result, data mapping issues will arise. The copybooks reflect the needs of the program, not the needs of a data-driven view.
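To make the mapping problem concrete, here is a minimal Python sketch of what a REDEFINES clause does; the copybook fragment and field names are hypothetical:

```python
# Hypothetical copybook fragment this sketch mimics:
#   05  TXN-DATE        PIC X(8).
#   05  TXN-DATE-PARTS  REDEFINES TXN-DATE.
#       10  TXN-YYYY    PIC 9(4).
#       10  TXN-MM      PIC 9(2).
#       10  TXN-DD      PIC 9(2).

record = b"20240115"  # one 8-byte storage area in the file

# View 1: the whole area as a single display field
txn_date = record.decode("ascii")    # "20240115"

# View 2: the same bytes redefined as three numeric sub-fields
txn_yyyy = int(record[0:4])          # 2024
txn_mm = int(record[4:6])            # 1
txn_dd = int(record[6:8])            # 15
```

An integration pipeline has to decide which view to materialize for each record; choosing the wrong one produces exactly the kind of mapping issues described above.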
DIFFERENT STORAGE FORMATS
Numeric values stored one way on a mainframe are often stored differently when the data moves to the cloud. Mainframes also use an entirely different character encoding: EBCDIC, an 8-bit scheme, versus the 7-bit ASCII used elsewhere. In addition, mainframes support multiple numeric encoding schemes that “pack” numbers into less storage space (e.g., packed decimal), and some techniques use each individual bit to store data.
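As a rough illustration of both issues (the field contents here are made up; real layouts come from the copybook), the sketch below decodes an EBCDIC text field using Python’s built-in code page 037 codec and unpacks a packed-decimal (COMP-3) amount:

```python
def unpack_comp3(data: bytes, scale: int = 0):
    """Decode an IBM packed-decimal (COMP-3) field: two digits per byte,
    with the final nibble holding the sign (0xD = negative)."""
    nibbles = []
    for b in data:
        nibbles.extend((b >> 4, b & 0x0F))
    sign = nibbles.pop()  # last nibble is the sign, not a digit
    value = int("".join(str(n) for n in nibbles))
    if sign == 0x0D:
        value = -value
    return value / (10 ** scale) if scale else value

# EBCDIC text field: code page 037 is a common US EBCDIC variant
name = b"\xC8\xC5\xD3\xD3\xD6".decode("cp037")    # -> "HELLO"

# Packed-decimal amount with an implied two-digit decimal scale
amount = unpack_comp3(b"\x12\x34\x5C", scale=2)   # -> 123.45
```

Feeding the raw bytes straight into an ASCII-oriented analytics platform, without this kind of conversion, is what turns mainframe extracts into garbage downstream.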
Whether it’s a lack of internal knowledge on how to handle legacy data or a rigid data framework, ignoring legacy data when building a modern data analytics platform means missing valuable information that can enhance any analytics project.
Pain Points of Building a Modern Analytics Platform
Tackling the challenges of mainframe data integration is no simple task. Besides determining the best approach for integrating these legacy data sources, IT departments are also dealing with the everyday challenges of running a department. Regardless of the size of an organization, there are daily struggles everyone faces, from siloed data to lack of IT skills.
Many organizations have adopted hybrid and multi-cloud strategies to manage data proliferation and increase capacity.
Cloud storage and the lakehouse architecture offer new ways to manage and store data. However, organizations still need to maintain and integrate their mainframes and other on-premises systems — resulting in a challenging integration strategy that must encompass a variety of environments.
The increase in data silos adds further complexity to growing data volumes. Data silo creation happens as a direct result of increasing data sources. Research has shown that data silos have directly inhibited the success of analytics and machine learning projects.
Processing the requirements of growing data volumes can cause a slowdown in a data stream. Loading hundreds, or even thousands, of database tables into a big data platform — combined with an inefficient use of system resources — can create a data bottleneck that hampers the performance of data integration pipelines.
Industry studies have shown that up to 90% of a data scientist’s time is spent getting data into the right condition for use in analytics. In other words, most of the data feeding analytics cannot be trusted as delivered. Data quality processes that produce consistent, actionable data are critical to providing analytics frameworks with trusted data.
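A data-quality gate can start as simply as the following sketch; the rules and field names are illustrative, not a prescribed standard:

```python
def validate(record: dict) -> list:
    """Return a list of data-quality issues for one record (empty = trusted)."""
    issues = []
    if not record.get("customer_id"):
        issues.append("missing customer_id")
    amount = record.get("amount")
    if not isinstance(amount, (int, float)) or amount < 0:
        issues.append("invalid amount")
    return issues

records = [
    {"customer_id": "C1", "amount": 19.99},
    {"customer_id": "",   "amount": -5},
]
# Only records that pass every check feed the analytics platform
trusted = [r for r in records if not validate(r)]
rejected = [(r, validate(r)) for r in records if validate(r)]
```

In practice these checks run inside the integration pipeline, so untrusted records are quarantined with their reasons rather than silently loaded.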
DATA TYPES AND FORMATS
Valuable data for analytics comes from a range of sources across the organization from CRM, ERPs, mainframes and online transaction processing systems. However, as organizations rely on more systems, the data types and formats continue to grow.
IT now has the challenge of making big data, NoSQL and unstructured data all readable for downstream analytics solutions.
SKILLS GAP AND RESOURCES
The need for workers who understand how to build data integration frameworks for mainframe, cloud, and cluster data sources is increasing, but the market cannot keep up. Studies have shown that unfilled data engineer jobs and data scientist jobs have increased 12x in the past year alone. As a result, IT needs to figure out how to integrate data for analytics with the skills they have internally.
What Your Cloud Data Platform Needs
A new data management paradigm has emerged that combines the best elements of data lakes and data warehouses, enabling both business intelligence and machine learning on all your business data: the lakehouse.
Lakehouses are enabled by a new system design: implementing data structures and data management features similar to those in a data warehouse directly on the kind of low-cost storage used for data lakes. They are what you would get if you redesigned the data warehouse for the modern world, now that cheap and highly reliable storage (in the form of object stores) is available.
This new paradigm is the vision for data management that provides the best architecture for modern analytics and AI. It will help organizations capture data from hundreds of sources, including legacy systems, and make that data available and ready for analytics, data science and machine learning.
A lakehouse has the following key features:
Open storage formats, such as Parquet, avoid lock-in and provide accessibility to the widest variety of analytics tools and applications
Decoupled storage and compute provides the ability to scale to many concurrent users by adding compute clusters that all access the same storage cluster
Transaction support handles failure scenarios and provides consistency when multiple jobs concurrently read and write data
Schema management enforces the expected schema when needed and handles evolving schemas as they change over time
Business intelligence tools directly access the lakehouse to query data, enabling access to the latest data without the cost and complexity of replicating data across a data lake and a data warehouse
Data science and machine learning tools used for advanced analytics rely on the same data repository
First-class support for all data types across structured, semi-structured and unstructured, plus batch and streaming data
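The schema management feature above can be sketched in a few lines of Python; this is a toy model of the behavior, not any particular lakehouse engine’s API:

```python
def apply_schema(record: dict, schema: dict, allow_evolution: bool = False) -> dict:
    """Enforce an expected schema on one record, optionally evolving it.

    `schema` maps column name -> expected Python type. Unknown columns are
    rejected unless allow_evolution is True, in which case they are added to
    the schema (mirroring lakehouse schema evolution). Missing columns are
    filled with None.
    """
    for col, value in record.items():
        if col not in schema:
            if not allow_evolution:
                raise ValueError("unexpected column: " + col)
            schema[col] = type(value)  # evolve: accept the new column
        elif value is not None and not isinstance(value, schema[col]):
            raise TypeError(col + ": wrong type")
    # fill columns the record is missing
    return {col: record.get(col) for col in schema}

schema = {"id": int, "name": str}
row = apply_schema({"id": 1, "name": "a", "email": "a@x.io"},
                   schema, allow_evolution=True)
# schema now also contains "email"; later records are checked against it
```

The same idea, backed by transaction support, is what lets concurrent writers change a table’s schema safely over time.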
Customer Experience (CX) is a catchy business term that has been used for decades, and until recently, measuring and managing it was not possible. Now, with the evolution of technology, a company can build and operationalize a true CX program.
For years, companies championed NPS surveys, CSAT scores, web feedback, and other sources of data as the drivers of “Customer Experience.” However, these singular sources of data don’t give a true, comprehensive view of how customers feel, think, and act. Unfortunately, most companies aren’t capitalizing on the benefits of a CX program. Less than 10% of companies have a CX executive, and of those companies, only 14% believe that Customer Experience, as a program, is the aggregation and analysis of all customer interactions, with the objective of uncovering and disseminating insights across the company in order to improve the experience. At a time when the customer experience separates the winners from the losers, CX must be more of a priority for ALL businesses.
This not only includes the analysis of typical channels in which customers directly interact with your company (calls, chats, emails, feedback, surveys, etc.) but all the channels in which customers may not be interacting directly with you – social, reviews, blogs, comment boards, media, etc.
In order to understand the purpose of a CX team and how it operates, you first need to understand how most businesses organize, manage, and carry out their customer experiences today.
Essentially, a company’s customer experience is owned and managed by a handful of teams, including, but not limited to, marketing, digital, and customer service.
All of these teams have a hand in customer experience.
To affirm that they are working towards a common goal, these teams must communicate in a timely manner, meet to discuss upcoming initiatives and projects, and review results along with future objectives.
In a perfect world, every team has the time and passion to accomplish these tasks to ensure the customer experience is in sync with their work. In reality, teams end up scrambling for information and understanding of how each business function is impacting the customer experience – sometimes after the CX program has already launched.
This process is extremely inefficient and can lead to serious problems across the customer experience. These problems can lead to irreparable financial losses. If business functions are not on the same page when launching an experience, it creates a broken one for customers. Siloed teams create siloed experiences.
There are plenty of companies that operate in a semi-siloed manner and feel it is successful. What these companies don’t understand is that customer experience issues often occur between the ownership of these silos, in what some refer to as the “customer experience abyss,” where no business function claims ownership. Customers react to these broken experiences by communicating their frustration through different communication channels (chats, surveys, reviews, calls, tweets, posts etc.).
For example, if a company launches a new subscription service and customers are confused about the pricing model, is it the job of customer service to explain it to customers? What about those customers that don’t contact the business at all? Does marketing need to modify their campaigns? Maybe digital needs to edit the nomenclature online… It could be all of these things. The key is determining which will solve the poor customer experience.
The objective of a CX program is to focus deeply on what customers are saying and shift business teams to become advocates for what they say. Once advocacy is achieved, the customer experience can be improved at scale with speed and precision. A premium customer experience is the key to company growth and customer retention. How important is the customer experience?
You may be saying to yourself, “We already have teams examining our customer data; there’s no need to establish a new team to look at it.” While this may be true, those teams are likely taking a siloed approach to analyzing customer data, investigating only the portion of the data they own.
For example, the social team looks at social data, the digital team analyzes web feedback and analytics, the marketing team reviews surveys and performs studies, etc. Seldom do these teams come together and combine their data to get a holistic view of the customer. Furthermore, when it comes to prioritizing CX improvements, they do so based on an incomplete view of the customer.
Consolidating all customer data gives a unified view of your customers while lessening the workload and increasing the rate at which insights are generated. The experience customers have with marketing, digital, and customer service, all lead to different interactions. Breaking these interactions into different, separate components is the reason companies struggle with understanding the true customer experience and miss the big picture on how to improve it.
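As a minimal sketch, a unified view can begin as nothing more than every interaction keyed by customer, regardless of channel; the channels and records below are invented for illustration:

```python
from collections import defaultdict

# Hypothetical per-channel extracts; in practice each comes from a team's own system
social = [{"customer": "C1", "text": "Love the new app"}]
surveys = [{"customer": "C1", "score": 9}, {"customer": "C2", "score": 3}]
calls = [{"customer": "C2", "reason": "billing"}]

# Consolidate: one timeline of interactions per customer, tagged by channel
unified = defaultdict(list)
for channel, interactions in [("social", social), ("survey", surveys), ("call", calls)]:
    for row in interactions:
        unified[row["customer"]].append({"channel": channel, **row})

# unified["C2"] now shows the low survey score next to the billing call,
# a connection no single team would see from its own data alone
```

Even this trivial consolidation surfaces cross-channel patterns that siloed analysis misses.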
The CX team, once established, will be responsible for creating a unified view of the customer which will provide the company with an unbiased understanding of how customers feel about their experiences as well as their expectations of the industry. These insights will provide awareness, knowledge, and curiosity that will empower business functions to improve the end-to-end customer experience.
CX programs are disruptive. A successful CX program will uncover insights that align with current business objectives, and some that don’t at all. So, what do you do when you run into that stone wall? How do you move forward when a business function refuses to adopt the voice of the customer? Call in back-up from an executive who understands the value of the voice of the customer and why it needs to be top of mind for every function.
When creating a disruptive program like CX, an executive owner is needed to overcome business hurdles along the way. Ideally, this executive owner will support the program and promote it to the broader business functions. In order to scale and become more widely adopted, it is also helpful to have executive support when the program begins.
The best candidates for initial ownership are typically marketing, analytics or operations executives. Along with understanding the value a CX program can offer, they should also understand the business’ current data landscape and help provide access to these data sets. Once the CX team has access to all the available customer data, it will be able to aggregate all necessary interactions.
Executive sponsors will help dramatically with CX program adoption and eventual scaling. They can provide the funding to secure initial success, promote the program so that other business functions work more closely with it, and remove roadblocks that might otherwise take weeks to clear.
Although an executive sponsor is not necessary, it can make your life exponentially easier while you build, launch, and execute your CX program. Your customers don’t always tell you what you want to hear, and that can be difficult for some business functions to handle. When this is the case, some business functions will try to discredit insights altogether if they don’t align with their goals.
Data grows exponentially every year, faster than any company can manage. In 2016, 90% of the world’s data had been created in the previous two years. 80% of that data was unstructured language. The hype of “Big Data” has passed and the focus is now on “Big Insights” – how to manage all the data and make it useful. A company should not be allocating resources to collecting more data through expensive surveys or market research – instead, they should be focused on doing a better job of listening and reacting to what customers are already saying, by unifying the voice of the customer with data that is already readily available.
It’s critical to identify all the available customer interactions and determine their value and richness. Be sure to think about all the forms of direct and indirect interaction customers have, including calls, chats, emails, feedback, surveys, social, reviews, blogs, comment boards, and media.
These channels are just a handful of the most popular avenues customers use to engage with brands. Your company may have more, fewer, or none of these. Regardless, the focus should be on aggregating as many as possible to create a holistic view of the customer. This does not mean only aggregating your phone calls and chats; it includes every channel where your customers talk with, at, or about your company. You can’t be selective when it comes to analyzing your customers by channel. All customers are important, and they may have different ways of communicating with you.
Imagine if someone only listened to their significant other in the two rooms where they spend the most time, say the family room and kitchen. They would probably have a good understanding of the overall conversations (similar to a company only reviewing calls, chats, and social). However, ignoring them in the dining room, bedroom, kids’ rooms, and backyard, would inevitably lead to serious communication problems.
It’s true that phone, chat, and social data is extremely rich, accessible, and popular, but that doesn’t mean you should ignore other customers. Every channel is important. Each is used by a different customer, in a different manner, and serves a different purpose, some providing more context than others.
You may find your most important customers aren’t always the loudest and may be interacting with you through an obscure channel you never thought about. You need every customer channel to fully understand their experience.
We know that businesses and government entities alike struggle to manage compliance requirements. Many have put up with challenges for so long—often with limited resources—that they no longer see how problematic the situation has become.
FIVE COMPLIANCE CHALLENGES YOU MIGHT BE DEALING WITH
01 COMPLIANCE SILOS
It’s not uncommon that, over time, separate activities, roles, and teams develop to address different compliance requirements. There’s often a lack of integration and communication among these teams or individuals. The result is duplicated efforts—and the creation of multiple clumsy and inefficient systems. This is then perpetuated as compliance processes change in response to regulations, mergers and acquisitions, or other internal business re-structuring.
02 NO SINGLE VIEW OF COMPLIANCE ASSURANCE
Siloed compliance systems also make it hard for senior management to get an overview of current compliance activities and perform timely risk assessments. If you can’t get a clear view of compliance risks, then chances are good that a damaging risk will slip under the radar, go unaddressed, or simply be ignored.
03 COBBLED TOGETHER, HOME-GROWN SYSTEMS
Using generalized software, like Excel spreadsheets and Word documents, in addition to shared folders and file systems, might have made sense at one point. But, as requirements become more complex, these systems become more frustrating, inefficient, and risky. Compiling hundreds or thousands of spreadsheets to support compliance management and regulatory reporting is a logistical nightmare (not to mention time-consuming). Spreadsheets are also prone to error and limited because they don’t provide audit trails or activity logs.
04 OLD SOFTWARE, NOT DESIGNED TO KEEP UP WITH FREQUENT CHANGES
You could be struggling with older compliance software products that aren’t designed to deal with constant change. These can be increasingly expensive to upgrade, not the most user-friendly, and difficult to maintain.
05 NOT USING AUTOMATED MONITORING
Many compliance teams are losing out by not using analytics and data automation. Instead, they rely heavily on sample testing to determine whether compliance controls and processes are working, so huge amounts of activity data are never actually checked.
Transform your compliance management process
Good news! There are some practical steps you can take to transform compliance processes and systems so that they become far more efficient and far less expensive and painful.
It’s all about optimizing the interactions of people, processes, and technology around regulatory compliance requirements across the entire organization.
It might not sound simple, but it’s what needs to be done. And, in our experience, it can be achieved without becoming massively time-consuming and expensive. Technology for regulatory compliance management has evolved to unite processes and roles across all aspects of compliance throughout your organization.
Look, for example, at how technology like Salesforce (a cloud-based system with big data analytics) has transformed sales, marketing, and customer service. Now, there’s similar technology which brings together different business units around regulatory compliance to improve processes and collaboration for the better.
Where to start?
Let’s look at what’s involved in establishing a technology-driven compliance management process. One that’s driven by data and fully integrated across your organization.
THE BEST PLACE TO START IS THE END
Step 1: Think about the desired end-state.
First, consider the objectives and the most important outcomes of your new process. How will it impact the different stakeholders? Take the time to clearly define the metrics you’ll use to measure your progress and success.
A few desired outcomes:
Accurately measure and manage the costs of regulatory and policy compliance.
Track how risks are trending over time, by regulation, and by region.
Understand, at any point in time, the effectiveness of compliance-related controls.
Standardize approaches and systems for managing compliance requirements and risks across the organization.
Efficiently integrate reporting on compliance activities with those of other risk management functions.
Create a quantified view of the risks faced due to regulatory compliance failures for executive management.
Increase confidence and response times around changing and new regulations.
Reduce duplication of efforts and maximize overall efficiency.
NOW, WHAT DO YOU NEED TO SUPPORT YOUR OBJECTIVES?
Step 2: Identify the activities and capabilities that will get you the desired outcomes.
Consider the different parts of the compliance management process below. Then identify the steps you’ll need to take or the changes you’ll need to make to your current activity that will help you achieve your objectives. We’ve put together a cheat sheet to help this along.
IDENTIFY & IMPLEMENT COMPLIANCE CONTROL PROCEDURES
01 Maintain a central library of regulatory requirements and internal corporate policies, allocated to owners and managers.
02 Define control processes and procedures that will ensure compliance with regulations and policies.
03 Link control processes to the corresponding regulations and corporate policies.
04 Assess the risk of control weaknesses and failure to comply with regulations and policies.
RUN TRANSACTIONAL MONITORING ANALYTICS
05 Monitor the effectiveness of controls and compliance activities with data analytics.
06 Get up-to-date confirmation of the effectiveness of controls and compliance from owners with automated questionnaires or certification of adherence statements.
MANAGE RESULTS & RESPOND
07 Manage the entire process of exceptions generated from analytic monitoring and from the generation of questionnaires and certifications.
REPORT RESULTS & UPDATE ASSESSMENTS
08 Use the results of monitoring and exception management to produce risk assessments and trends.
09 Identify new and changing regulations as they occur and update repositories and control and compliance procedures.
10 Report on the current status of compliance management activities from high- to low-detail levels.
IMPROVE THE PROCESS
11 Identify duplicate processes and fix procedures to combine and improve controls and compliance tests.
12 Integrate regulatory compliance risk management, monitoring, and reporting with overall risk management activities.
Eight compliance processes in desperate need of technology
01 Centralize regulations & compliance requirements
A major part of regulatory compliance management is staying on top of countless regulations and all their details. A solid content repository includes not only the regulations themselves, but also related data. By centralizing your regulations and compliance requirements, you’ll be able to start classifying them, so you can eventually search regulations and requirements by type, region of applicability, effective dates, and modification dates.
02 Map to risks, policies, & controls
Classifying regulatory requirements is no good on its own. They need to be connected to risk management, control and compliance processes, and system functionality. This is the most critical part of a compliance management system.
Typically, in order to do this mapping, you need:
An assessment of non-compliant risks for each requirement.
Defined processes for how each requirement is met.
Defined controls that make sure the compliance process is effective in reducing non-compliance risks.
Controls mapped to specific analytics monitoring tests that confirm their effectiveness on an ongoing basis.
Assigned owners for each mapped requirement. Specific processes and controls may be assigned to sub-owners.
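The mapping above can be modeled as one record per requirement. The sketch below uses Python dataclasses; the field names and the example requirement are illustrative, not a prescribed schema:

```python
from dataclasses import dataclass, field

@dataclass
class Requirement:
    """One regulatory requirement plus everything mapped to it."""
    regulation: str
    risk: str                                     # assessed non-compliance risk
    process: str                                  # how the requirement is met
    controls: list = field(default_factory=list)  # controls reducing the risk
    monitoring_tests: list = field(default_factory=list)
    owner: str = "unassigned"

req = Requirement(
    regulation="anti-bribery (e.g., FCPA)",
    risk="high",
    process="vendor payment approval workflow",
    controls=["dual approval over $10k"],
    monitoring_tests=["flag payments to politically exposed persons"],
    owner="AP compliance manager",
)

def is_mapped(r: Requirement) -> bool:
    """A requirement with no controls or no owner is an unmapped gap."""
    return bool(r.controls) and r.owner != "unassigned"
```

Running `is_mapped` across the whole repository gives a quick inventory of requirements that still lack controls or owners.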
03 Connect to data & use advanced analytics
Using different automated tests to access and analyze data is foundational to a data-driven compliance management approach.
The range of data sources and data types needed to perform compliance monitoring can be humongous. When it comes to areas like FCPA or other anti-bribery and corruption regulations, you might need to access entire populations of purchase and payment transactions, general ledger entries, payroll, and travel and entertainment expenses. And that’s just the internal sources. External sources could include things like the Politically Exposed Persons database or Sanctions Checks.
Extensive suites of tests and analyses can be run against the data to determine whether compliance controls are working effectively and if there are any indications of transactions or activities that fail to comply with regulations. The results of these analyses identify specific anomalies and control exceptions, as well as provide statistical data and trend reports that indicate changes in compliance risk levels.
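A full-population monitoring run can be sketched as a suite of tests applied to every transaction; the tests, thresholds, and data below are invented for illustration:

```python
transactions = [
    {"id": 1, "vendor": "Acme Ltd",     "amount": 4_800},
    {"id": 2, "vendor": "Shell Corp X", "amount": 9_999},  # just under the approval limit
    {"id": 3, "vendor": "Acme Ltd",     "amount": 4_800},  # repeat of transaction 1
]
sanctioned = {"Shell Corp X"}  # stand-in for an external sanctions list

def run_tests(txns, sanctioned_parties, approval_limit=10_000):
    """Run simple full-population compliance tests; return exception IDs per test."""
    exceptions = {"sanctioned_party": [], "near_limit": [], "duplicate": []}
    seen = set()
    for t in txns:
        if t["vendor"] in sanctioned_parties:
            exceptions["sanctioned_party"].append(t["id"])
        if approval_limit * 0.95 <= t["amount"] < approval_limit:
            exceptions["near_limit"].append(t["id"])  # possible limit-splitting
        key = (t["vendor"], t["amount"])
        if key in seen:
            exceptions["duplicate"].append(t["id"])
        seen.add(key)
    return exceptions
```

Because every transaction is tested rather than a sample, the exception counts double as trend data for compliance risk levels.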
Truly delivering on this step involves using the right technology since the requirements for accessing and analyzing data for compliance are demanding. Generalized analytic software is seldom able to provide more than basic capabilities, which are far removed from the functionality of specialized risk and control monitoring technologies.
04 Monitor incidents & manage issues
It’s important to quickly and efficiently manage incidents once they’re flagged. But systems that create huge numbers of “false positives” or “false negatives” can end up wasting a lot of time and resources. On the other hand, a system that fails to detect high-risk activities creates the risk of major financial and reputational damage. The monitoring technology you choose should let you fine-tune analytics to flag actual risks and compliance failures while minimizing false alarms.
The system should also allow for an issues resolution process that’s timely and maintains the integrity of responses. If the people responsible for resolving a flagged issue don’t do it adequately, an automated workflow should escalate the issues to the next level.
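An automated escalation rule can be as simple as mapping elapsed time past the SLA to a level in the escalation path; the path, SLA, and dates below are assumptions:

```python
from datetime import datetime, timedelta

ESCALATION_PATH = ["control owner", "compliance manager", "chief compliance officer"]

def escalation_level(flagged_at: datetime, now: datetime, sla_hours: int = 48) -> str:
    """Each fully elapsed SLA window moves an unresolved issue up one level."""
    windows_missed = int((now - flagged_at) / timedelta(hours=sla_hours))
    return ESCALATION_PATH[min(windows_missed, len(ESCALATION_PATH) - 1)]

flagged = datetime(2024, 1, 1, 9, 0)
day_one = escalation_level(flagged, datetime(2024, 1, 2, 9, 0))   # still within SLA
day_four = escalation_level(flagged, datetime(2024, 1, 5, 9, 0))  # two windows missed
```

A real workflow engine would also record who acted and when, preserving the integrity of responses for later audit.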
Older software can’t meet the huge range of incident monitoring and issues management requirements. Or it can require a lot of effort and expense to modify the procedures when needed.
05 Manage investigations
As exceptions and incidents are identified, some turn into issues that need in-depth investigation. Software helps this investigation process by allowing the user to document and log activities. It should also support easy collaboration of anyone involved in the investigation process.
Effective security must be in place around access to all aspects of a compliance management system. But it’s extra important to have a high level of security and privacy for the investigation management process.
06 Use surveys, questionnaires & certifications
Going beyond just transactional analysis and monitoring, it’s also important to understand what’s actually happening right now, by collecting the input of those working in the front-lines.
Software that has built-in automated surveys and questionnaires can gather large amounts of current information directly from these individuals in different compliance roles, then quickly interpret the responses.
For example, if you’re required to comply with the Sarbanes-Oxley Act (SOX), you can use automated questionnaires and certifications to collect individual sign-off on SOX control effectiveness questions. That information is consolidated and used to support the SOX certification process far more efficiently than using traditional ways of collecting sign-off.
07 Manage regulatory changes
Regulations change constantly, and to remain compliant, you need to know—quickly— when those changes happen. This is because changes can often mean modifications to your established procedures or controls, and that could impact your entire compliance management process.
A good compliance software system is built to withstand these revisions. It allows for easy updates to existing definitions of controls, processes, and monitoring activities.
Before software, any regulatory changes would involve huge amounts of manual activities, causing backlogs and delays. Now much (if not most) of the regulatory change process can be automated, freeing your time to manage your part of the overall compliance program.
08 Ensure regulatory examination & oversight
No one likes going through compliance reviews by regulatory bodies. It’s even worse if failures or weaknesses surface during the examination.
But if that happens to you, it’s good to know that many regulatory authorities have proven to be more accommodating and (dare we say) lenient when your compliance process is strategic, deliberate, and well designed.
There are huge benefits, in terms of efficiency and cost savings, by using a structured and well-managed regulatory compliance system. But the greatest economic benefit happens when you can avoid a potentially major financial penalty as a result of replacing an inherently unreliable and complicated legacy system with one that’s purpose-built and data-driven.
The unexpected COVID-19 outbreak led European countries to shut down major parts of their economies in an effort to contain it. Financial markets experienced huge losses and flight-to-quality investment behaviour. Governments and central banks committed to providing significant emergency packages to support the economy, as the economic shock caused by demand and supply disruptions, and its reflection in the financial markets, is expected to challenge economic growth, the labour market and consumer sentiment across Europe for an uncertain period of time.
Amid an unprecedented downward shift of interest rate curves during March, reflecting the flight-to-quality behaviour, credit spreads of corporates and sovereigns increased for riskier assets, effectively creating a double-hit scenario. Equity markets dropped dramatically, showing extreme levels of volatility in response to uncertainty about the virus’s effects and about the status and effectiveness of government and central bank support programs. Despite the stressed market environment, there were signs of improvement following the announcements of the support packages and as initiatives to gradually reopen the economies got under way. The virus outbreak also led to extraordinary working conditions, with part of the services sector working from home, raising the potential that those conditions will persist after the outbreak, which could decrease demand and market values for commercial real estate investments.
Within this challenging environment, insurers are exposed to solvency risk, profitability risk and reinvestment risk. The sudden reassessment of risk premia and the increase in default risk could trigger large-scale rating downgrades and reduce the value of investments held by insurers and IORPs, especially exposures to highly indebted corporates and sovereigns. At the same time, the risk of ultra-low interest rates persisting for long has further increased. Factoring in the knock-on effects of the weakening macroeconomy, insurers’ future own-funds positions could be further challenged by potentially lower volumes of profitable new business written, accompanied by an increased volume of profitable in-force policies being surrendered or lapsed.
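The “double hit” can be sketched with a first-order duration approximation; the durations and shocks below are invented for illustration and are not EIOPA figures:

```python
def pv_change(duration: float, yield_shift_bp: float) -> float:
    """First-order (duration) estimate of the fractional price change
    for a parallel yield shift given in basis points."""
    return -duration * yield_shift_bp / 10_000

# Illustrative double-hit scenario
asset_duration = 7.0        # corporate bond portfolio
liability_duration = 12.0   # long-dated insurance liabilities
spread_widening_bp = 100    # corporate spreads +100bp -> asset values fall
rate_drop_bp = -50          # risk-free rates -50bp -> liability values rise

asset_move = pv_change(asset_duration, spread_widening_bp)    # -7% on assets
liability_move = pv_change(liability_duration, rate_drop_bp)  # +6% on liabilities
```

Assets falling while liabilities rise squeezes own funds from both sides at once, which is what makes the scenario a double hit.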
Finally, liquidity risk has resurfaced, due to the potential for mass-lapse events and higher-than-expected virus- and litigation-related claims, accompanied by decreased inflows of premiums.
For the European occupational pension sector, the negative impact of COVID-19 on the asset side is mainly driven by deteriorating equity market prices, as, in a number of Member States, IORPs allocate significant proportions of the asset portfolio (up to nearly 60%) in equity investments. However, the investment allocation is highly divergent amongst Member States, so that IORPs in other Member States hold up to 70% of their investments in bonds, mostly sovereign bonds, where the widening of credit spreads impair their market value. The liability side is already pressured due to low interest rates and, where market-consistent valuation is applied, due to low discount rates. The funding and solvency ratios of IORPs are determined by national law and, as could be seen in the 2019 IORP stress test results, have been under pressure and are certainly negatively impacted by this crisis. The current situation may lead to benefit cuts for members and may require sponsoring undertakings to finance funding gaps, which may lead to additional pressure on the real economy and on entities sponsoring an IORP.
Climate risks remain one of the focal points for the insurance and pension industry, with Environmental, Social and Governance (ESG) factors increasingly shaping the investment decisions of insurers and pension funds and also affecting their underwriting. In response to climate-related risks, the EU presented in mid-December the European Green Deal, a roadmap for making the EU climate neutral by 2050, with actions meant to boost the efficient use of resources by moving to a clean, circular economy, stopping climate change, reversing biodiversity loss and cutting pollution.
At the same time, natural catastrophe losses were milder than in the previous year, but were asymmetrically shifted towards poorer countries lacking relevant insurance coverage.
Cyber risk has become increasingly relevant across the financial system, particularly during the virus outbreak, owing to the new working conditions imposed by confinement measures. Amid the extraordinary en masse remote working arrangements, an increased number of cyber-attacks has been reported, targeting both individuals and healthcare systems. With increasing attention to cyber risk at both national and European level, EIOPA contributed to building a strong, reliable cyber insurance market by publishing its strategy for cyber underwriting, and has been actively involved in promoting cyber resilience in the insurance and pension sectors.
Turning a Regulatory Requirement Into Competitive Advantage
Mandated enterprise stress testing – the primary macro-prudential tool that emerged from the 2008 financial crisis – helps regulators address concerns about the state of the banking industry and its impact on the local and global financial system. These regulatory stress tests typically focus on the largest banking institutions and involve a limited set of prescribed downturn scenarios.
Regulatory stress testing requires a significant investment by financial institutions – in technology, skilled people and time. And the stress testing process continues to become even more complex as programs mature and regulatory expectations keep growing.
The question is, what’s the best way to go about stress testing, and what other benefits can banks realize from this investment? Equally important, should you view stress testing primarily as a regulatory compliance tool? Or can banks harness it as a management tool that links corporate planning and risk appetite – and democratizes scenario-based analysis across the institution for faster, better business decisions?
These are important questions for every bank executive and risk officer to answer because justifying large financial investments in people and technology solely to comply with periodic regulatory requirements can be difficult. Not that noncompliance is ever an option; failure can result in severe damage to reputation and investor confidence.
But savvy financial institutions are looking for – and realizing – a significant return on investment by reaching beyond simple compliance. They are seeing more effective, consistent analytical processes and the ability to address complex questions from senior management (e.g., the sensitivity of financial performance to changes in macroeconomic factors). Their successes provide a road map for those who are starting to build – or are rethinking their approach to – their stress testing infrastructure.
This article reviews the maturation of regulatory stress test regimes and explores diverse use cases where stress testing (or, more broadly, scenario-based analysis) may provide value beyond regulatory stress testing.
Comprehensive Capital Assessments: A Daunting Exercise
The regulatory stress test framework that emerged following the 2008 financial crisis – under which banks perform capital adequacy-oriented stress testing over a multiperiod forecast horizon – is summarized in Figure 1. At each period, a scenario exerts its impact on the net profit or loss through the state of the business, including portfolio balances and operational income and costs. The net profit or loss, after being adjusted for other financial obligations and management actions, determines the capital that is available for the next period on the scenario path.
Note that the natural evolution of the portfolio and business under a given scenario leads to a state of the business at the next horizon, which then starts a new evaluation of the available capital. The risk profile of this business evaluation also determines the capital requirement under the same scenario. The capital adequacy assessment can be performed through this dynamic analysis of capital supply and demand.
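The loop described above can be sketched in a few lines (all figures hypothetical): each period’s scenario P&L feeds the capital available at the next horizon, which is then compared against that period’s capital requirement:

```python
# Hypothetical multiperiod capital path: starting capital plus scenario-driven
# net income each period, compared against a required-capital level.

def capital_path(initial_capital, net_income_by_period, required_capital):
    """Roll capital forward each period and flag any shortfall."""
    capital, path = initial_capital, []
    for period, (income, required) in enumerate(
            zip(net_income_by_period, required_capital), start=1):
        capital += income  # scenario P&L feeds next period's capital supply
        path.append((period, capital, capital >= required))
    return path

# A stressed scenario: losses in years 1-2, partial recovery in year 3.
path = capital_path(100.0, [-30.0, -20.0, 15.0], [60.0, 55.0, 55.0])
for period, capital, adequate in path:
    print(period, capital, "adequate" if adequate else "SHORTFALL")
```

The path-dependence is visible here: the year-2 shortfall only emerges because year 1’s losses have already depleted the capital supply, which is why single-horizon analysis understates the risk.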
This comprehensive capital assessment requires cooperation from various groups across business and finance in an institution. But it becomes a daunting exercise on a multiperiod scenario because of the forward-looking and path-dependent nature of the analysis. For this reason, some jurisdictions began the exercise with only one horizon. Over time, these requirements have been revised to cover at least two horizons, which allows banks to build more realistic business dynamics into their analysis.
Maturing and Optimizing Regulatory Stress Testing
Stress testing – now a standard supervisory tool – has greatly improved banking sector resilience. In regions where stress testing capabilities are more mature, banks have built up adequate capital and have performed well in recent years. For example, both the Board of Governors of the US Federal Reserve System and the Bank of England announced good results for their recent stress tests of large banks.
As these programs mature, many jurisdictions are raising their requirements, both quantitatively and qualitatively. For example:
US CCAR and Bank of England stress tests now require banks to carry out tests on institution-specific scenarios, in addition to prescribed regulatory scenarios.
Banks in regions adopting IFRS 9 – including the EU, Canada and the UK – are now required to incorporate IFRS 9 estimates into regulatory stress tests. Likewise, banks subject to stress testing in the US will need to incorporate CECL estimates into their capital adequacy tests.
Liquidity risk has been incorporated into stress tests – especially as part of resolution and recovery planning – in regions like the US and UK.
Jurisdictions in Asia (such as Taiwan) have extended the forecast horizons for their regulatory stress tests.
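One of the requirements above – carrying IFRS 9 / CECL estimates into the capital projection – amounts to computing scenario-conditional expected credit losses. A minimal sketch using the standard PD × LGD × EAD decomposition, with hypothetical weights and parameters:

```python
# Scenario-conditional expected credit loss (ECL) as PD x LGD x EAD,
# probability-weighted across scenarios -- the general shape IFRS 9 / CECL
# estimates take before feeding a capital projection. Figures are hypothetical.

def expected_credit_loss(scenarios):
    """scenarios: list of (weight, pd, lgd, ead) tuples; weights sum to 1."""
    return sum(w * pd * lgd * ead for w, pd, lgd, ead in scenarios)

scenarios = [
    (0.5, 0.02, 0.40, 1_000_000),  # baseline
    (0.3, 0.05, 0.45, 1_000_000),  # adverse
    (0.2, 0.10, 0.55, 1_000_000),  # severely adverse
]
print(f"weighted ECL: {expected_credit_loss(scenarios):,.0f}")
```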
In addition, stress testing and scenario analysis are now part of Pillar 2 in the Internal Capital Adequacy Assessment Process (ICAAP) published by the Basel Committee on Banking Supervision. Institutions are expected to use stress tests and scenario analyses to improve their understanding of the vulnerabilities that they face under a wide range of adverse conditions. Further uses of regulatory stress testing include the scenario-based analysis for Interest Rate Risk in the Banking Book (IRRBB).
Finally, the goal of regulatory stress testing is increasingly extending beyond completing a simple assessment. Management must prepare a viable mitigation plan should an adverse condition occur. Some regions also require companies to develop “living wills” to ensure the orderly wind-down of institutions and to prevent systemic contagion from an institutional failure.
All of these demands will require the adoption of new technologies and best practices.
Exploring Enhanced Use Cases for Stress Testing Capabilities
As noted by the Basel Committee on Banking Supervision in its 2018 publication Stress Testing Principles, “Stress testing is now a critical element of risk management for banks and a core tool for banking supervisors and macroprudential authorities.” As stress testing capabilities have matured, people are exploring how to use these capabilities for strategic business purposes – for example, to perform “internal stress testing.”
The term “internal stress testing” can seem ambiguous. Some stakeholders don’t understand the various use cases for applying scenario-based analyses beyond regulatory stress testing or doubt the strategic value to internal management and planning. Others think that developing a scenario-based analytics infrastructure that is useful across the enterprise is just too difficult or costly.
But there are, in fact, many high-impact strategic use cases for stress testing across the enterprise, including:
Risk appetite management.
What-if and sensitivity analysis.
Emerging risk identification.
Reverse stress testing.
Stress testing is one form of scenario-based analysis. But scenario-based analysis is also useful for forward-looking financial planning exercises on several fronts:
Business plans and management actions are already required as part of regulatory stress testing, so it’s natural to align these processes with internal planning and strategic management.
Scenario-based analyses lay the foundation for assessing and communicating the impacts of changing environmental factors and portfolio shifts on the institution’s financial performance.
At a more advanced level, banks can incorporate scenario-based planning with optimization techniques to find an optimal portfolio strategy that performs robustly across a range of scenarios.
Here, banks can leverage the technologies and processes used for regulatory stress testing. However, both the infrastructure and program processes must be developed with flexibility in mind – so that both business-as-usual scenarios and alternatives can be easily managed, and the models and assumptions can be adjusted.
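As a toy illustration of scenario-based planning with optimization, the sketch below (scenario returns and candidate mixes are hypothetical) picks the asset mix whose worst-case return across a few scenarios is best – a simple maximin rule, not any particular bank’s method:

```python
# Toy robust-planning sketch: choose the asset mix whose worst-case return
# across a handful of scenarios is highest (a maximin rule). Scenario returns
# and candidate mixes are hypothetical.

# Per-scenario returns for two asset classes: (bonds, equities).
scenario_returns = [(0.02, 0.08),   # benign
                    (0.01, -0.15),  # equity shock
                    (-0.03, 0.04)]  # rate shock

def worst_case(mix):
    bond_w, equity_w = mix
    return min(bond_w * b + equity_w * e for b, e in scenario_returns)

candidate_mixes = [(w / 10, 1 - w / 10) for w in range(11)]
best = max(candidate_mixes, key=worst_case)
print("most robust mix (bonds, equities):", best)
```

A grid search suffices here; a real exercise would use a proper optimizer over many more instruments and scenarios, but the robustness criterion is the same.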
Risk Appetite Management
A closely related topic to stress testing and capital planning is risk appetite. Risk appetite defines the level of risk an institution is willing to take to achieve its financial objectives. According to the Senior Supervisors Group (2008), a clearly articulated risk appetite helps financial institutions properly understand, monitor, and communicate risks internally and externally.
Figure 2 illustrates the dynamic relationship between stress testing, risk appetite and capital planning. Note that:
Risk appetite is defined by the institution to reflect its capital strategy, return targets and its tolerance for risk.
Capital planning is conducted in alignment with the stated risk appetite and risk policy.
Scenario-based analyses are then carried out to ensure the bank can operate within the risk appetite under a range of scenarios (i.e., planning, baseline and stressed).
Any breach of the stated risk appetite observed in these analyses leads to management action. These actions may include, but are not limited to, enforcement or reallocation of risk limits, revisions to capital planning, or adjustments to current risk appetite levels.
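A breach check of this kind can be sketched as follows (the metric names, limits and scenario values are hypothetical):

```python
# Sketch of a risk appetite check across scenarios: flag any scenario where a
# projected metric breaches its stated limit, as a trigger for management
# action. Metrics, limits and scenario values are hypothetical.

risk_appetite = {"cet1_ratio_min": 0.10, "lcr_min": 1.00}

scenario_results = {
    "planning": {"cet1_ratio": 0.135, "lcr": 1.25},
    "baseline": {"cet1_ratio": 0.125, "lcr": 1.15},
    "stressed": {"cet1_ratio": 0.095, "lcr": 1.05},
}

def breaches(results, appetite):
    out = []
    for scenario, metrics in results.items():
        if metrics["cet1_ratio"] < appetite["cet1_ratio_min"]:
            out.append((scenario, "cet1_ratio"))
        if metrics["lcr"] < appetite["lcr_min"]:
            out.append((scenario, "lcr"))
    return out

print(breaches(scenario_results, risk_appetite))
```

Here only the stressed scenario trips a limit (the capital ratio), which would prompt one of the management actions listed above.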
What-If and Sensitivity Analysis
Faster, richer what-if analysis is perhaps the most powerful – and demanding – way to extend a bank’s stress testing utility. What-if analyses are often initiated from ad hoc requests made by management seeking timely insight to guide decisions. Narratives for these scenarios may be driven by recent news topics or unfolding economic events.
An anecdotal example illustrates the business value of this type of analysis. Two years ago, a chief risk officer at one of the largest banks in the United States was at a dinner event and heard concerns about Chinese real estate and a potential market crash. He quickly asked his stress testing team to assess the impact on the bank if such an event occurred. His team was able to report back within a week. Fortunately, the result was not bad – news that was a relief to the CRO.
The responsiveness exhibited by this CRO’s stress testing team is impressive. But speed alone is not enough. To really get value from what-if analysis, banks must also conduct it with a reasonable level of detail and sophistication. For this reason, banks must design their stress test infrastructure to balance comprehensiveness and performance. Otherwise, its value will be limited.
Sensitivity analysis usually supplements stress testing. It differs from other scenario-based analyses in that the scenarios typically lack a narrative around them. Instead, they are usually defined parametrically to answer questions about scenario, assumption and model deviations.
Sensitivity analysis can answer questions such as:
Which economic factors are the most significant for future portfolio performance?
What level of uncertainty results from incremental changes to inputs and assumptions?
What portfolio concentrations are most sensitive to model inputs?
For modeling purposes, sensitivity tests can be viewed as an expanded set of scenario analyses. Thus, if banks perform sensitivity tests, they must be able to scale their infrastructure to complete a large number of tests within a reasonable time frame and must be able to easily compare the results.
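A minimal parametric sensitivity sweep of the kind described above might look like this – the linear loss model and factor set are hypothetical stand-ins for a bank’s actual models:

```python
# Parametric sensitivity sketch: bump each macro factor by a small step,
# re-run a (stand-in) loss model, and rank factors by impact.

baseline = {"unemployment": 0.05, "gdp_growth": 0.02, "house_prices": 0.00}

def portfolio_loss(factors):
    # Stand-in model: losses rise with unemployment, fall with GDP growth
    # and house-price growth.
    return (100 * factors["unemployment"]
            - 40 * factors["gdp_growth"]
            - 25 * factors["house_prices"])

def sensitivities(base, step=0.01):
    """Change in loss from bumping each factor by `step`, one at a time."""
    base_loss = portfolio_loss(base)
    return {name: portfolio_loss(dict(base, **{name: base[name] + step}))
                  - base_loss
            for name in base}

ranked = sorted(sensitivities(baseline).items(),
                key=lambda kv: abs(kv[1]), reverse=True)
print(ranked)  # most significant factor first
```

Because each bump is an independent model run, this scales naturally to the large batches of tests the paragraph above calls for.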
Emerging Risk Identification
Econometric-based stress testing of portfolio-level credit, market, interest rate and liquidity risks is now a relatively established practice. But measuring the impacts from other risks, such as reputation and strategic risk, is not trivial. Scenario-based analysis provides a viable solution, though it requires proper translation from the scenarios involving these risks into a scenario that can be modeled. This process often opens a rich dialogue across the institution, leading to a beneficial consideration of potential business impacts.
Reverse Stress Testing
To enhance the relevance of the scenarios applied in stress testing analyses, many regulators have required banks to conduct reverse stress tests. For reverse stress tests, institutions must determine the risk factors that have a high impact on their business and identify scenarios that result in breaching the thresholds of specific output metrics (e.g., total capital ratio).
There are multiple approaches to reverse stress testing. Skoglund and Chen proposed a method leveraging risk information measures to decompose the risk factor impact from simulations and apply the results for stress testing. Chen and Skoglund also explained how stress testing and simulation can leverage each other for risk analyses.
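One simple way to operationalise a reverse stress test – not the specific method of Skoglund and Chen – is a bisection search for the smallest shock severity that breaches a target ratio, assuming the ratio falls monotonically with severity. The response function below is a hypothetical stand-in:

```python
# Reverse stress test sketch: find, by bisection, the smallest severity of a
# chosen shock that pushes the total capital ratio below a threshold.

def capital_ratio(severity):
    # Stand-in response: ratio falls from 14% as severity (0..1) increases.
    return 0.14 - 0.09 * severity ** 1.5

def breaking_severity(threshold=0.08, lo=0.0, hi=1.0, tol=1e-6):
    """Smallest severity with capital_ratio(severity) < threshold."""
    while hi - lo > tol:
        mid = (lo + hi) / 2
        if capital_ratio(mid) < threshold:
            hi = mid
        else:
            lo = mid
    return hi

print(f"scenario breaches 8% threshold at severity {breaking_severity():.3f}")
```

In practice the "severity" axis would parameterize a full scenario path rather than a single number, but the inversion logic – working backwards from the breached metric to the scenario – is the same.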
Assessing the Impacts of COVID-19
The worldwide spread of COVID-19 in 2020 has presented a sudden shock to the financial plans of lending institutions. Both the spread of the virus and the global response to it are highly dynamic. Bank leaders, seeking a timely understanding of the potential financial impacts, have increasingly turned to scenario analysis. But, to be meaningful, the process must:
Scale to an increasing array of input scenarios as the situation continues to develop.
Provide a controlled process to perform and summarize numerous iterations of analysis.
Provide understandable and explainable results in a timely fashion.
Provide process transparency and control for qualitative and quantitative assumptions.
Maintain detailed data to support ad hoc reporting and concentration analysis.
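A controlled batch run over such an evolving scenario set can be sketched as follows (the scenario inputs and loss model are hypothetical):

```python
# Sketch of a controlled batch run over many what-if scenarios (e.g., rapidly
# evolving pandemic assumptions), keeping per-scenario detail for ad hoc
# reporting plus a one-line summary. Inputs and loss model are hypothetical.

def run_scenario(name, gdp_shock, duration_quarters):
    # Stand-in loss model: deeper and longer downturns cost more.
    loss = 50 * abs(gdp_shock) * duration_quarters
    return {"scenario": name, "gdp_shock": gdp_shock,
            "duration_quarters": duration_quarters, "loss": loss}

scenarios = [("V-shaped", -0.05, 2), ("U-shaped", -0.07, 4),
             ("L-shaped", -0.09, 8)]

results = [run_scenario(*s) for s in scenarios]  # detail kept for reporting
worst = max(results, key=lambda r: r["loss"])
print(f"{len(results)} scenarios run; worst case: "
      f"{worst['scenario']} with loss {worst['loss']:.1f}")
```

Keeping the full per-scenario records (rather than only the summary) is what supports the ad hoc reporting and concentration analysis called for above.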
Banks able to conduct rapid ad hoc analysis can respond more confidently and provide a data-driven basis for the actions they take as the crisis unfolds.
Regulatory stress testing has become a primary tool for bank supervision, and financial institutions have dedicated significant time and resources to comply with their regional mandates. However, the benefits of scenario-based analysis reach beyond such rote compliance.
Leading banks are finding they can expand the utility of their stress test programs to enhance their understanding of portfolio dynamics, improve their planning processes and better prepare for future crises.
Through increased automation, institutions can explore a greater range of scenarios, reduce processing time and effort, and support the increased flexibility required for strategic scenario-based analysis.
Armed with these capabilities, institutions can improve their financial performance and successfully weather downturns by making better, data-driven decisions.