Stress Testing 2.0: Better Informed Decisions Through Expanded Scenario-Based Risk Management

Turning a Regulatory Requirement Into Competitive Advantage

Mandated enterprise stress testing – the primary macro-prudential tool that emerged from the 2008 financial crisis – helps regulators address concerns about the state of the banking industry and its impact on the local and global financial system. These regulatory stress tests typically focus on the largest banking institutions and involve a limited set of prescribed downturn scenarios.

Regulatory stress testing requires a significant investment by financial institutions – in technology, skilled people and time. And the stress testing process grows ever more complex as programs mature and regulatory expectations keep rising.

The question is, what’s the best way to go about stress testing, and what other benefits can banks realize from this investment? Equally important, should you view stress testing primarily as a regulatory compliance tool? Or can banks harness it as a management tool that links corporate planning and risk appetite – and democratizes scenario-based analysis across the institution for faster, better business decisions?

These are important questions for every bank executive and risk officer to answer because justifying large financial investments in people and technology solely to comply with periodic regulatory requirements can be difficult. Not that noncompliance is ever an option; failure can result in severe damage to reputation and investor confidence.

But savvy financial institutions are looking for – and realizing – a significant return on investment by reaching beyond simple compliance. They are seeing more effective, consistent analytical processes and the ability to address complex questions from senior management (e.g., the sensitivity of financial performance to changes in macroeconomic factors). Their successes provide a road map for those who are starting to build – or are rethinking their approach to – their stress testing infrastructure.

This article reviews the maturation of regulatory stress test regimes and explores diverse use cases where stress testing (or, more broadly, scenario-based analysis) may provide value beyond regulatory stress testing.

Comprehensive Capital Assessments: A Daunting Exercise

The regulatory stress test framework that emerged following the 2008 financial crisis – that banks perform capital adequacy-oriented stress testing over a multiperiod forecast horizon – is summarized in Figure 1. At each period, a scenario exerts its impact on the net profit or loss based on the as-of-date business, including portfolio balances, exposures, and operational income and costs.

The net profit or loss, after being adjusted by other financial obligations and management actions, will determine the capital that is available for the next period on the scenario path.

[Figure 1: The regulatory stress test framework]

Note that the natural evolution of the portfolio and business under a given scenario leads to a new state of the business at the next horizon, which then starts a new evaluation of the available capital. The risk profile of this evolved business also determines the capital requirement under the same scenario. The capital adequacy assessment can then be performed through this dynamic analysis of capital supply and demand.
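
To make this path dependence concrete, here is a minimal Python sketch of a multi-period capital projection. The balance-sheet dynamics, factor sensitivities and the 8% requirement are illustrative assumptions, not the framework’s actual models.

```python
# Minimal sketch of a multi-period, path-dependent capital projection.
# All functions, parameters and numbers are illustrative assumptions,
# not the methodology of any particular regulator or vendor.

def project_capital(initial_capital, balances, scenario, horizons=9):
    """Roll capital forward along one scenario path, period by period."""
    capital = initial_capital
    path = []
    for t in range(horizons):
        gdp_growth, unemployment = scenario[t]          # macro factors for this period

        # Scenario-conditional P&L drivers (stylized models).
        net_interest_income = 0.012 * balances
        credit_losses = balances * max(0.0, 0.01 - 0.4 * gdp_growth
                                       + 0.15 * (unemployment - 0.05))
        operating_costs = 0.008 * balances

        net_profit = net_interest_income - credit_losses - operating_costs
        capital += net_profit                            # capital available next period

        # Capital demand from the evolved (riskier or safer) portfolio.
        risk_weighted_assets = balances * (1.0 + 2.0 * (unemployment - 0.05))
        required_capital = 0.08 * risk_weighted_assets

        path.append({"period": t, "capital": capital,
                     "required": required_capital,
                     "surplus": capital - required_capital})

        balances *= 1.0 + gdp_growth                     # portfolio evolves with the scenario
    return path


# Example: a stylized adverse scenario of (GDP growth, unemployment) per quarter.
adverse = [(-0.02, 0.07), (-0.03, 0.09), (-0.01, 0.10)] + [(0.01, 0.08)] * 6
for row in project_capital(initial_capital=12.0, balances=100.0, scenario=adverse):
    print(row)
```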

This comprehensive capital assessment requires cooperation from various groups across business and finance in an institution. But it becomes a daunting exercise over a multiperiod scenario because of the forward-looking, path-dependent nature of the analysis. For this reason, some jurisdictions began the exercise with only one horizon. Over time, these requirements have been revised to cover at least two horizons, which allows banks to build more realistic business dynamics into their analysis.

Maturing and Optimizing Regulatory Stress Testing

Stress testing – now a standard supervisory tool – has greatly improved banking sector resilience. In regions where stress testing capabilities are more mature, banks have built up adequate capital and have performed well in recent years. For example, both the Board of Governors of the US Federal Reserve System and the Bank of England announced good results for their recent stress tests on large banks.

As these programs mature, many jurisdictions are raising their requirements, both quantitatively and qualitatively. For example:

  • US CCAR and Bank of England stress tests now require banks to carry out tests on institution-specific scenarios, in addition to prescribed regulatory scenarios.
  • Banks in regions adopting IFRS 9, including the EU, Canada and the UK, are now required to incorporate IFRS 9 estimates into regulatory stress tests. Likewise, banks subject to stress testing in the US will need to incorporate CECL estimates into their capital adequacy tests.
  • Liquidity risk has been incorporated into stress tests – especially as part of resolution and recovery planning – in regions like the US and UK.
  • Jurisdictions in Asia (such as Taiwan) have extended the forecast horizons for their regulatory stress tests.

In addition, stress testing and scenario analysis are now part of Pillar 2 in the Internal Capital Adequacy Assessment Process (ICAAP) published by the Basel Committee on Banking Supervision. Institutions are expected to use stress tests and scenario analyses to improve their understanding of the vulnerabilities that they face under a wide range of adverse conditions. Further uses of regulatory stress testing include scenario-based analysis for Interest Rate Risk in the Banking Book (IRRBB).

Finally, the goal of regulatory stress testing is increasingly extending beyond completing a simple assessment. Management must prepare a viable mitigation plan should an adverse condition occur. Some regions also require companies to develop “living wills” to ensure the orderly wind-down of institutions and to prevent systemic contagion from an institutional failure.

All of these demands will require the adoption of new technologies and best practices.

Exploring Enhanced Use Cases for Stress Testing Capabilities

As noted by the Basel Committee on Banking Supervision in its 2018 publication Stress Testing Principles, “Stress testing is now a critical element of risk management for banks and a core tool for banking supervisors and macroprudential authorities.” As stress testing capabilities have matured, people are exploring how to use these capabilities for strategic business purposes – for example, to perform “internal stress testing.”

The term “internal stress testing” can seem ambiguous. Some stakeholders don’t understand the various use cases for applying scenario-based analyses beyond regulatory stress testing or doubt the strategic value to internal management and planning. Others think that developing a scenario-based analytics infrastructure that is useful across the enterprise is just too difficult or costly.

But there are, in fact, many high-impact strategic use cases for stress testing across the enterprise, including:

  1. Financial planning.
  2. Risk appetite management.
  3. What-if and sensitivity analysis.
  4. Emerging risk identification.
  5. Reverse stress testing.

Financial Planning

Stress testing is one form of scenario-based analysis. But scenario-based analysis is also useful for forward-looking financial planning exercises on several fronts:

  • The development of business plans and management actions is already required as part of regulatory stress testing, so it’s natural to align these processes with internal planning and strategic management.
  • Scenario-based analyses lay the foundation for assessing and communicating the impacts of changing environmental factors and portfolio shifts on the institution’s financial performance.
  • At a more advanced level, banks can incorporate scenario-based planning with optimization techniques to find an optimal portfolio strategy that performs robustly across a range of scenarios.

Here, banks can leverage the technologies and processes used for regulatory stress testing. However, both the infrastructure and program processes must be developed with flexibility in mind – so that both business-as-usual scenarios and alternatives can be easily managed, and the models and assumptions can be adjusted.
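
As a rough illustration of the optimization point above, the sketch below scores a few candidate portfolio strategies across several scenarios and selects the one with the best worst-case outcome (a simple maximin rule). The strategies, scenarios and payoff model are invented for the example.

```python
# Choosing a portfolio strategy that performs robustly across scenarios.
# Strategies, scenario returns and the payoff model are illustrative assumptions.

strategies = {
    "grow_lending": {"loans": 0.7, "securities": 0.2, "cash": 0.1},
    "balanced":     {"loans": 0.5, "securities": 0.3, "cash": 0.2},
    "defensive":    {"loans": 0.3, "securities": 0.4, "cash": 0.3},
}

# Stylized per-asset returns under each scenario.
scenario_returns = {
    "baseline":        {"loans": 0.06, "securities": 0.03, "cash": 0.01},
    "mild_downturn":   {"loans": 0.01, "securities": 0.02, "cash": 0.01},
    "severe_downturn": {"loans": -0.05, "securities": 0.00, "cash": 0.01},
}

def payoff(weights, returns):
    """Portfolio return under one scenario."""
    return sum(weights[asset] * returns[asset] for asset in weights)

# Maximin: pick the strategy whose worst scenario outcome is best.
robust = max(strategies,
             key=lambda s: min(payoff(strategies[s], r)
                               for r in scenario_returns.values()))
print("Most robust strategy:", robust)
```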

Risk Appetite Management

A closely related topic to stress testing and capital planning is risk appetite. Risk appetite defines the level of risk an institution is willing to take to achieve its financial objectives. According to Senior Supervisors Group (2008), a clearly articulated risk appetite helps financial institutions properly understand, monitor, and communicate risks internally and externally.

Figure 2 illustrates the dynamic relationship between stress testing, risk appetite and capital planning. Note that:

  • Risk appetite is defined by the institution to reflect its capital strategy, return targets and its tolerance for risk.
  • Capital planning is conducted in alignment with the stated risk appetite and risk policy.
  • Scenario-based analyses are then carried out to ensure the bank can operate within the risk appetite under a range of scenarios (i.e., planning, baseline and stressed).

[Figure 2: The dynamic relationship between stress testing, risk appetite and capital planning]

Any breach of the stated risk appetite observed in these analyses leads to management action. These actions may include, but are not limited to,

  • enforcement or reallocation of risk limits,
  • revisions to capital planning
  • or adjustments to current risk appetite levels.
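
A minimal sketch of that breach-monitoring step, assuming a risk appetite statement expressed as floors and caps on a few metrics; the metric names, limits and scenario results are hypothetical.

```python
# Check scenario results against a stated risk appetite and flag breaches.
# Metric names, limits and scenario results are hypothetical.

risk_appetite = {
    "cet1_ratio":       {"min": 0.10},   # stay above a 10% capital floor
    "liquidity_ratio":  {"min": 1.00},   # LCR-style floor
    "credit_loss_rate": {"max": 0.03},   # annual losses below 3% of the book
}

scenario_results = {
    "planning": {"cet1_ratio": 0.13, "liquidity_ratio": 1.25, "credit_loss_rate": 0.010},
    "baseline": {"cet1_ratio": 0.12, "liquidity_ratio": 1.20, "credit_loss_rate": 0.012},
    "stressed": {"cet1_ratio": 0.09, "liquidity_ratio": 1.05, "credit_loss_rate": 0.035},
}

def find_breaches(results, appetite):
    """Return (scenario, metric, value, limit) for every appetite breach."""
    breaches = []
    for scenario, metrics in results.items():
        for metric, limits in appetite.items():
            value = metrics[metric]
            if "min" in limits and value < limits["min"]:
                breaches.append((scenario, metric, value, f">= {limits['min']}"))
            if "max" in limits and value > limits["max"]:
                breaches.append((scenario, metric, value, f"<= {limits['max']}"))
    return breaches

for scenario, metric, value, limit in find_breaches(scenario_results, risk_appetite):
    print(f"Breach under '{scenario}': {metric} = {value} (appetite {limit})")
```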

What-If and Sensitivity Analysis

Faster, richer what-if analysis is perhaps the most powerful – and demanding – way to extend a bank’s stress testing utility. What-if analyses are often initiated from ad hoc requests made by management seeking timely insight to guide decisions. Narratives for these scenarios may be driven by recent news topics or unfolding economic events.

An anecdotal example illustrates the business value of this type of analysis. Two years ago, a chief risk officer at one of the largest banks in the United States was at a dinner event and heard concerns about Chinese real estate and a potential market crash. He quickly asked his stress testing team to assess the impact on the bank if such an event occurred. His team was able to report back within a week. Fortunately, the result was not bad – news that was a relief to the CRO.

The responsiveness exhibited by this CRO’s stress testing team is impressive. But speed alone is not enough. To really get value from what-if analysis, banks must also conduct it with a reasonable level of detail and sophistication. For this reason, banks must design their stress test infrastructure to balance comprehensiveness and performance. Otherwise, its value will be limited.

Sensitivity analysis usually supplements stress testing. It differs from other scenario-based analyses in that the scenarios typically lack a narrative. Instead, they are usually defined parametrically to answer questions about deviations in scenarios, assumptions and models.

Sensitivity analysis can answer questions such as:

  • Which economic factors are the most significant for future portfolio performance?
  • What level of uncertainty results from incremental changes to inputs and assumptions?
  • What portfolio concentrations are most sensitive to model inputs?

For modeling purposes, sensitivity tests can be viewed as an expanded set of scenario analyses. Thus, if banks perform sensitivity tests, they must be able to scale their infrastructure to complete a large number of tests within a reasonable time frame and must be able to easily compare the results.
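
For instance, a parametric sensitivity sweep can be generated as a grid of single-factor perturbations around a baseline scenario. The sketch below uses invented factor names and a toy loss model; it mainly shows how quickly the number of runs grows and why results must be easy to compare.

```python
# Parametric sensitivity analysis: perturb one input at a time around a baseline
# and compare a single output metric. Factor names and the loss model are toy examples.
import itertools

baseline = {"unemployment": 0.05, "house_price_growth": 0.02, "base_rate": 0.02}
shocks = [-0.02, -0.01, 0.0, 0.01, 0.02]          # additive deviations from baseline

def expected_loss(inputs):
    # Stylized loss model: losses rise with unemployment, fall with house prices.
    return max(0.0, 0.01 + 0.5 * inputs["unemployment"]
                      - 0.3 * inputs["house_price_growth"]
                      + 0.1 * inputs["base_rate"])

results = []
for factor, shock in itertools.product(baseline, shocks):
    scenario = dict(baseline, **{factor: baseline[factor] + shock})
    results.append((factor, shock, expected_loss(scenario)))

base_loss = expected_loss(baseline)
for factor, shock, loss in results:
    print(f"{factor:20s} shock {shock:+.2f} -> loss {loss:.4f} (delta {loss - base_loss:+.4f})")
```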

Emerging Risk Identification

Econometric-based stress testing of portfolio-level credit, market, interest rate and liquidity risks is now a relatively established practice. But measuring the impacts from other risks, such as reputation and strategic risk, is not trivial. Scenario-based analysis provides a viable solution, though it requires proper translation from the scenarios involving these risks into a scenario that can be modeled. This process often opens a rich dialogue across the institution, leading to a beneficial consideration of potential business impacts.

Reverse Stress Testing

To enhance the relevance of the scenarios applied in stress testing analyses, many regulators have required banks to conduct reverse stress tests. For reverse stress tests, institutions must determine the risk factors that have a high impact on their business and identify scenarios that cause specific output metrics (e.g., the total capital ratio) to breach their thresholds.

There are multiple approaches to reverse stress testing. Skoglund and Chen proposed a method leveraging risk information measures to decompose the risk factor impact from simulations and apply the results for stress testing. Chen and Skoglund also explained how stress testing and simulation can leverage each other for risk analyses.
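
A toy reverse stress test can be run by searching over the severity of a single risk factor until the output metric reaches its threshold. The bisection sketch below assumes a monotone relationship between the shock and the capital ratio, and the capital model itself is invented for illustration (it is not the method proposed by Skoglund and Chen).

```python
# Reverse stress test sketch: find the unemployment shock at which the
# total capital ratio breaches a threshold. The capital model is illustrative,
# and a monotone response to the shock is assumed so bisection applies.

def capital_ratio(unemployment_shock):
    losses = 40.0 * unemployment_shock            # losses grow with the shock
    capital = 12.0 - losses
    risk_weighted_assets = 100.0 * (1.0 + unemployment_shock)
    return capital / risk_weighted_assets

THRESHOLD = 0.08
lo, hi = 0.0, 0.20                                # search shocks between 0 and 20 points

assert capital_ratio(lo) > THRESHOLD > capital_ratio(hi), "threshold not bracketed"
for _ in range(50):                               # bisection to locate the breach point
    mid = 0.5 * (lo + hi)
    if capital_ratio(mid) > THRESHOLD:
        lo = mid
    else:
        hi = mid

print(f"Capital ratio falls below {THRESHOLD:.0%} at an unemployment shock of ~{hi:.3f}")
```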

Assessing the Impacts of COVID-19

The worldwide spread of COVID-19 in 2020 has presented a sudden shock to the financial plans of lending institutions. Both the spread of the virus and the global response to it are highly dynamic. Bank leaders, seeking a timely understanding of the potential financial impacts, have increasingly turned to scenario analysis. But, to be meaningful, the process must:

  • Scale to an increasing array of input scenarios as the situation continues to develop.
  • Provide a controlled process to perform and summarize numerous iterations of analysis.
  • Provide understandable and explainable results in a timely fashion.
  • Provide process transparency and control for qualitative and quantitative assumptions.
  • Maintain detailed data to support ad hoc reporting and concentration analysis.

Banks able to conduct rapid ad hoc analysis can respond more confidently and provide a data-driven basis for the actions they take as the crisis unfolds.
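
One way to keep such rapid, repeated analysis controlled is to treat each run as a record that carries its scenario inputs, qualitative assumptions and results together, so iterations can be summarized and audited later. A minimal sketch, with illustrative field names and a toy impact model, follows.

```python
# Controlled batch of scenario runs: each run keeps its inputs, assumptions and
# outputs together so results can be summarized, compared and audited later.
# Field names, scenarios and the impact model are illustrative.
from dataclasses import dataclass, field
from datetime import date

@dataclass
class ScenarioRun:
    name: str
    inputs: dict
    assumptions: dict
    run_date: date = field(default_factory=date.today)
    results: dict = field(default_factory=dict)

def run_scenario(run: ScenarioRun) -> ScenarioRun:
    gdp = run.inputs["gdp_growth"]
    run.results["expected_loss"] = max(0.0, 0.015 - 0.5 * gdp)   # toy impact model
    return run

runs = [
    run_scenario(ScenarioRun("v-shaped recovery", {"gdp_growth": -0.04},
                             {"payment_holidays": "expire Q3"})),
    run_scenario(ScenarioRun("prolonged downturn", {"gdp_growth": -0.08},
                             {"payment_holidays": "extended to Q1 next year"})),
]

for r in runs:
    print(f"{r.run_date} {r.name:20s} loss rate {r.results['expected_loss']:.3f} "
          f"(assumptions: {r.assumptions})")
```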

Conclusion

Regulatory stress testing has become a primary tool for bank supervision, and financial institutions have dedicated significant time and resources to comply with their regional mandates. However, the benefits of scenario-based analysis reach beyond such rote compliance.

Leading banks are finding they can expand the utility of their stress test programs to

  • enhance their understanding of portfolio dynamics,
  • improve their planning processes
  • and better prepare for future crises.

Through increased automation, institutions can

  • explore a greater range of scenarios,
  • reduce processing time and effort,
  • and support the increased flexibility required for strategic scenario-based analysis.

Armed with these capabilities, institutions can improve their financial performance and successfully weather downturns by making better, data-driven decisions.

Click here to access SAS’ latest Whitepaper

Implementing combined audit assurance

ASSESS IMPACT & CREATE AN ASSURANCE MAP

The audit impact assessment and assurance map are interdependent—and the best possible starting point for your combined assurance journey. An impact assessment begins with a critical look at the current or “as is” state of your organization. As you review your current state, you build out your assurance map with your findings. You can’t really do one without the other. The map, then, will reveal any overlaps and gaps, and provide insight into the resources, time, and costs you might require during your implementation. Looking at an assurance map example will give you a better idea of what we’re talking about. The Institute of Chartered Accountants of England and Wales (ICAEW) has an excellent template.

[Figure: ICAEW assurance map template]

The ICAEW has also provided a guide to building a sound assurance map. The institute suggests you take the following steps:

  1. Identify your sponsor (the main user/senior staff member who will act as a champion).
  2. Determine your scope (identify elements that need assurance, like operational/ business processes, board-level risks, governance, and compliance).
  3. Assess the required amount of assurance for each element (understand what the required or desired amount of assurance is across aspects of the organization).
  4. Identify and list your assurance providers in each line of defense (e.g., audit committee or risk committee in the third line).
  5. Identify your assurance activities (compile and review relevant documentation, select and interview area leads, collate and assess assurance provider information).
  6. Reassess your scope (revisit and update your map scope, based on the information you have gathered/evaluated to date).
  7. Assess the quality of your assurance activities (look at breadth and depth of scope, assurance provider competence, how often activities are reviewed, and the strengths/quality of assurance delivered by each line of defense).
  8. Assess the aggregate actual amount of assurance for each element (the total amount of assurance needs to be assessed, collating all the assurance being provided by each line of defense).
  9. Identify the gaps and overlaps in assurance for each element (compare the actual amount of assurance with the desired amount to determine if there are gaps or overlaps).
  10. Determine your course of action (make recommendations for the actions to be taken/activities to be performed moving forward).

Just based on the steps above, you can see how your desired state takes shape by the time you reach step 10. Ideally, by this point, gaps and overlaps have been eliminated. But the steps we just reviewed don’t cover the frequency of each review, and they don’t determine costs. So we’ve added a few more steps to round things out:

  1. Assess the frequency of each assurance activity.
  2. Identify total cost for all the assurance activities in the current state.
  3. Identify the total cost for combined assurance (i.e., when gaps and overlaps have been addressed, and any consequent benefits or cost savings).
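
To illustrate how a map like this can surface gaps and overlaps mechanically, the sketch below represents each element with its desired and actual assurance levels and the providers covering it. The element names, the 0–3 assurance scale and the overlap rule are hypothetical, not part of the ICAEW template.

```python
# Toy assurance map: compare desired vs. actual assurance per element and
# flag gaps (too little assurance) and overlaps (duplicate coverage).
# Element names, the 0-3 scale and the overlap rule are hypothetical.

assurance_map = [
    {"element": "Procurement process",  "desired": 3, "actual": 1,
     "providers": ["Internal audit"]},
    {"element": "Cyber security",       "desired": 3, "actual": 3,
     "providers": ["IT risk", "Internal audit", "External audit"]},
    {"element": "Regulatory reporting", "desired": 2, "actual": 2,
     "providers": ["Compliance"]},
]

for row in assurance_map:
    if row["actual"] < row["desired"]:
        print(f"GAP:     {row['element']} (actual {row['actual']} < desired {row['desired']})")
    if len(row["providers"]) > 2:
        print(f"OVERLAP: {row['element']} covered by {', '.join(row['providers'])}")
```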

DEFINE THE RISKS OF IMPLEMENTATION

Implementing combined assurance is a project, and like any project, there’s a chance it can go sideways and fail, losing you both time and money. So, just like anything else in business, you need to take a risk-based approach. As part of this stage, you’ll want to clearly define the risks of implementing a combined assurance program, and add these risks, along with a mitigation plan and the expected benefits, to your tool kit. As long as the projected benefits of the project outweigh the residual risks and costs, the implementation program is worth pursuing. You’ll need to be able to demonstrate that a little further down the process.

DEFINE RESOURCES & DELIVERABLES

Whoever will own the project of implementing combined assurance will no doubt need dedicated resources in order to execute. So, who do we bring in? On first thought, the internal audit team looks best suited to drive the program forward. But, during the implementation phase, you’ll actually want a cross-functional team of people from internal control, risk, and IT to work alongside internal audit. So, when you’re considering resourcing, think about each and every team this project touches. Now that you know who’s going to do the work, you’ll want to define what they’re doing (key milestones) and when it will be delivered (time frame). And finally, define the actual benefits, as well as the tangible deliverables/outcomes of implementing combined assurance. (The table below provides some examples, but each organization will be unique.)

[Table: Example benefits and deliverables of implementing combined assurance]

RAISE AWARENESS & GET MANAGEMENT COMMITMENT

Congratulations! You’re now armed with a fancy color-coded impact assessment, and a full list of risks, resources, and deliverables. The next step is to clearly communicate and share the driving factors behind your combined assurance initiative. If you want them to support and champion your efforts, top management will need to be able to quickly take in and understand the rationale behind your desire for combined assurance. Critical output: You’ll want to create a presentation kit of sorts, including the assurance map, lists of risks, resources, and deliverables, a cost/benefit analysis, and any supporting research or frameworks (e.g., the King IV Report, FRC Corporate Governance Code, available industry analysis, and case studies). Chances are, you’ll be presenting this concept more than once, so if you can gather and organize everything in a single spot, that will save a lot of headaches down the track.

ASSIGN ACCOUNTABILITY

When we ask the question, “Who owns the implementation of combined assurance?”, we need to consider two main things:

  • Who would be most impacted if combined assurance were implemented?
  • Who would be senior enough to work across teams to actually get the job done?

It’s evident that a board/C-level executive should lead the project. This project will span multiple departments and require buy-in from many people—so you need someone who can influence and convince. Therefore, we feel that the chief audit executive (CAE) and/or the chief risk officer (CRO) should be accountable for implementing combined assurance. The CAE literally stands at the intersection of internal and external assurance. Where reliance is placed on the work of others, the CAE is still accountable and responsible for ensuring adequate support for conclusions and opinions reached by the internal audit activity. And the CRO is taking a more active interest in assurance maps as they become increasingly risk-focused. The Institute of Internal Auditors (IIA), Standard 2050, also assigns accountability to the CAE, stating: “The chief audit executive should share information and coordinate activities with other internal and external assurance providers and consulting services to ensure proper coverage and minimize duplication of effort.” So, not only is the CAE at the intersection of assurance, they’re also directing traffic—exactly the combination we need to drive implementation.

Envisioning the solution

You’ve summarized the current/“as is” state in your assurance map. Now it’s time to move into a future state of mind and envision your desired state. What does your combined assurance solution look like? And, more critically, how will you create it? This stage involves more assessment work. Only now you’ll be digging into the maturity levels of your organization’s risk management and internal audit process, as well as the capabilities and maturity of your Three Lines of Defense. This is where you answer the questions, “What do I want?”, and “Is it even feasible?” Some make-or-break capability factors for implementing combined assurance include:

  1. Corporate risk culture: Risk culture and risk appetite shape an organization’s decision-making, and that culture is reflected at every level. Organizations that are more risk-averse tend to be unwilling to make quick decisions without evidence and data. On the other hand, risk-tolerant organizations take more risks, make rapid decisions, and pivot quickly, often without performing due diligence. How will your risk culture shape your combined assurance program?
  2. Risk management awareness: If employees don’t know—and don’t prioritize—how risk can and should be managed in your organization, your implementation program will fail. Assurance is very closely tied to risk, so it’s important to communicate constantly and make people aware that risk at every level must be adequately managed.
  3. Risk management processes: We just stated that risk and assurance are tightly coupled, so it makes sense that the more mature your risk management processes are, the easier it will be to implement combined assurance. Mature risk management means you’ve got processes defined, documented, running, and refined. For the lucky few who have all of these things, you’re going to have a much easier time than those who don’t.
  4. Risk & controls taxonomy: Without question, you will require a common risk and compliance language. We can’t have people making up names for tools, referring to processes in different ways, or, worst of all, reporting on totally random KPIs. The result of combined assurance should be “one language, one voice, one view” of the risks and issues across the organization.
  5. System & process integrations: An integrated system with one set of risks and one set of controls is key to delivering effective combined assurance. This includes risk registers across the organization, controls across the organization, issues and audit findings, and reporting.
  6. Technology use: Without dedicated software technology, it’s extremely difficult to provide a sustainable risk management system with sound processes, a single taxonomy, and integrated risks and controls. How technology is used in your organization will determine the sustainability of combined assurance. (If you already have a risk management and controls platform with these integration capabilities, implementation will be easier.)
  7. Using assurance maps as monitoring tools: Assurance maps aren’t just for envisioning end states; they’re also critical monitoring tools that can feed data into your combined assurance dashboard and help report on progress.
  8. Continuous improvement mechanisms: A mature program will always have improvement mechanisms and feedback loops to incorporate user and stakeholder feedback. A lack of such mechanisms will undermine the continued effectiveness of combined assurance.

We now assess the maturity of these factors (plus any others that you find relevant) and rank them on a scale of 1-4:

  • Level 1: Not achieved (0-15% of target).
  • Level 2: Partially achieved (15-50%).
  • Level 3: Largely achieved (50-85%).
  • Level 4: Achieved (85-100%).

This rating scale is based on ISO/IEC 15504, which assigns a rating to the degree to which each objective (process capability) is achieved. An example of a combined assurance capability maturity assessment can be seen in Figure 2.

[Figure 2: Example combined assurance capability maturity assessment]

GAP ANALYSIS

Once the desired levels for all of the factors are agreed on and endorsed by senior management, the next step is to undertake a gap analysis. The example in Figure 2 shows that the current overall maturity level is a 2 and the desired level is a 3 or 4 for each factor. The gap for each factor needs to be analyzed for the activities and resources required to bridge it. Then you can envision the solution and create a roadmap to bridge the gap(s).
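
A small sketch of that gap analysis, assuming the 1–4 scale above and a handful of the capability factors; the current and desired levels are made up for illustration.

```python
# Capability maturity gap analysis on the 1-4 scale described above.
# Factor names come from the list above; the levels themselves are illustrative.

factors = {
    "Corporate risk culture":    {"current": 2, "desired": 3},
    "Risk management processes": {"current": 2, "desired": 4},
    "Risk & controls taxonomy":  {"current": 1, "desired": 3},
    "Technology use":            {"current": 2, "desired": 4},
}

# Rank factors by the size of the gap so the roadmap can prioritize them.
for name, levels in sorted(factors.items(),
                           key=lambda kv: kv[1]["desired"] - kv[1]["current"],
                           reverse=True):
    gap = levels["desired"] - levels["current"]
    print(f"{name:30s} current {levels['current']}  desired {levels['desired']}  gap {gap}")
```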

SOLUTION VISION & ROADMAP

An example solution vision and roadmap could be:

  • We will use the same terminology and language for risk in all parts of the organization, and establish a single risk dictionary as a central repository.
  • All risks will be categorized according to severity and criticality and be mapped to assurance providers to ensure that no risk is assessed by more than one provider.
  • A rolling assurance plan will be prepared to ensure that risks are appropriately prioritized and reviewed at least once every two years.
  • An integrated, real-time report will be available on demand to show the status, frequency, and coverage of assurance activities.
  • The integrated report/assurance map will be shared with the board, audit committee, and risk committee regularly (e.g., quarterly or half-yearly).
  • To enable these capabilities, risk capture, storage, and reporting will be automated using an integrated software platform.

Figure 3 shows an example roadmap to achieve your desired maturity level.

[Figure 3: Example roadmap to the desired maturity level]

Click here to access Galvanize’s Risk Management White Paper

 

Fintech, regtech and the role of compliance in 2020

The ebb and flow of attitudes on the adoption and use of technology has evolving ramifications for financial services firms and their compliance functions, according to the findings of the Thomson Reuters Regulatory Intelligence’s fourth annual survey on fintech, regtech and the role of compliance. This year’s survey results represent the views and experiences of almost 400 compliance and risk practitioners worldwide.

During the lifetime of the report it has had nearly 2,000 responses and been downloaded nearly 10,000 times by firms, risk and compliance practitioners, regulators, consultancies, law firms and global systemically-important financial institutions (G-SIFIs). The report also highlights the shifting role of the regulator and concerns about best or better practice approaches to tackle the rise of cyber risk. The findings have become a trusted source of insight for firms, regulators and their advisers alike. They are intended to help regulated firms with planning, resourcing and direction, and to allow them to benchmark whether their resources, skills, strategy and expectations are in line with those of the wider industry. As with previous reports, regional and G-SIFI results are split out where they highlight any particular trend. One challenge for firms is the need to acquire the skill sets which are essential if they are to reap the expected benefits of technological solutions. Equally, regulators and policymakers need to have the appropriate up-to-date skill sets to enable consistent oversight of the use of technology in financial services. Firms themselves, and G-SIFIs in particular, have made substantial investments in skills and the upgrading of legacy systems.

Key findings

  • The involvement of risk and compliance functions in their firm’s approach to fintech, regtech and insurtech continues to evolve. Some 65% of firms reported their risk and compliance function was either fully engaged and consulted or had some involvement (59% in prior year). In the G-SIFI population 69% reported at least some involvement with those reporting their compliance function as being fully engaged and consulted almost doubling from 13% in 2018, to 25% in 2019. There is an even more positive picture presented on increasing board involvement in the firm’s approach to fintech, regtech and insurtech. A total of 62% of firms reported their board being fully engaged and consulted or having some involvement, up from 54% in the prior year. For G-SIFIs 85% reported their board being fully engaged and consulted or having some involvement, up from 56% in the prior year. In particular, 37% of G-SIFIs reported their board was fully engaged with and consulted on the firm’s approach to fintech, regtech and insurtech, up from 13% in the prior year.
  • Opinion on technological innovation and digital disruption has fluctuated in the past couple of years. Overall, the level of positivity about fintech innovation and digital disruption has increased, after a slight dip in 2018. In 2019, 83% of firms have a positive view of fintech innovation (23% extremely positive, 60% mostly positive), compared with 74% in 2018 and 83% in 2017. In the G-SIFI population the positivity rises to 92%. There are regional variations, with the UK and Europe reporting a 97% positive view at one end going down to a 75% positive view in the United States.
  • There has been a similar ebb and flow of opinion about regtech innovation and digital disruption although at lower levels. A total of 77% reported either an extremely or mostly positive view, up from 71% in the prior year. For G-SIFIs 81% had a positive view, up from 76% in the prior year.
  • G-SIFIs have reported a significant investment in specialist skills for both risk and compliance functions and at board level. Some 21% of G-SIFIs reported they had invested in and/or appointed people with specialist skills to the board to accommodate developments in fintech, insurtech and regtech, up from 2% in the prior year. This means in turn 79% of G-SIFIs have not completed their work in this area, which is potentially disturbing. Similarly, 25% of G-SIFIs have invested in specialist skills for the risk and compliance functions, up from 9% in the prior year. In the wider population 10% reported investing in specialist skills at board level and 16% reported investing in specialist skills for the risk and compliance function. A quarter (26%) reported they have yet to invest in specialist skills for the risk and compliance function, but they know it is needed (32% for board-level specialist skills). Again, these figures suggest 75% of G-SIFIs have not fully upgraded their risk and compliance functions, rising to 84% in the wider population.
  • The greatest financial technology challenges firms expect to face in the next 12 months have changed in nature since the previous survey, with the top three challenges cited as keeping up with technological advancements; budgetary limitations, lack of investment and cost; and data security. In prior years, the biggest challenges related to the need to upgrade legacy systems and processes, budgetary limitations, the adequacy and availability of skilled resources, and the need for cyber resilience. In terms of the greatest benefits expected from financial technology in the next 12 months, the top three are a strengthening of operational efficiency, improved services for customers and greater business opportunities.
  • G-SIFIs are leading the way on the implementation of regtech solutions. Some 14% of G-SIFIs have implemented a regtech solution, up from 9% in the prior year with 75% (52% in the prior year) reporting they have either fully or partially implemented a regtech solution to help manage compliance. In the wider population, 17% reported implementing a regtech solution, up from 8% in the prior year. The 2018 numbers overall showed a profound dip from 2017 when 29% of G-SIFIs and 30% of firms reported implementing a regtech solution, perhaps highlighting that early adoption of regtech solutions was less than smooth.
  • Where firms have not yet deployed fintech or regtech solutions, various reasons were cited for what was holding them back. Significantly, one third of firms cited lack of investment; a similar number pointed to a lack of in-house skills and information security/data protection concerns. Some 14% of firms and 12% of G-SIFIs reported they had taken a deliberate strategic decision not to deploy fintech or regtech solutions yet.
  • There continues to be substantial variation in the overall budget available for regtech solutions. A total of 38% of firms (31% in prior year) reported that the expected budget would grow in the coming year, however, 31% said they lack a budget for regtech (25% in the prior year). For G-SIFIs 48% expected the budget to grow (36% in prior year), with 12% reporting no budget for regtech solutions (6% in the prior year).

Focus: Challenges for firms

Technological challenges for firms come in all shapes and sizes. There is the potential, marketplace-changing challenge posed by the rise of bigtech. There is also the evolving approach of regulators and the need to invest in specialist skill sets. Lastly, there is the emerging need to keep up with technological advances themselves.

The challenges for firms have moved on. In the first three years of the report, the biggest financial technology challenge facing firms was the need to upgrade legacy systems and processes. This year the top three challenges are expected to be the need to keep up with technological advancements; perceived budgetary limitations, lack of investment and cost; and data security.

Focus: Cyber risk

Cyber risk and the need to be cyber-resilient is a major challenge for financial services firms which are targets for hackers. They must be prepared and be able to respond to any kind of cyber incident. Good customer outcomes will be under threat if cyber resilience fails.

One of the most prevalent forms of cyber attack is ransomware. There are different types of ransomware, all of which seek to prevent a firm or an individual from using their IT systems and demand that something be done (usually payment of a ransom) before access is restored. Even then, there is no guarantee that paying the ransom or acceding to the attacker’s demands will restore full access to all IT systems, data or files. Many firms have found that critical files, often containing client data, have been encrypted as part of an attack and large amounts of money demanded for restoration. Encryption is in this instance used as a weapon, and it can be practically impossible to reverse-engineer the encryption or “crack” the files without the original encryption key – which cyber attackers deliberately withhold.

What was previously viewed as an IT problem has become a significant issue for risk and compliance functions. The regulatory stance is typified by the UK Financial Conduct Authority (FCA), which has said its goal is to “help firms become more resilient to cyber attacks, while ensuring that consumers are protected and market integrity is upheld”. Regulators do not expect firms to be impervious but do expect cyber risk management to become a core competency.

Good and better practice on defending against ransomware attacks

Risk and compliance officers do not need to become technological experts overnight but must ensure cyber risks are effectively managed and reported on within their firm’s corporate governance framework. For some compliance officers, cyber risk may be well outside their comfort zone, but there is evidence that simple steps implemented rigorously can go a long way towards protecting a firm and its customers. Basic cyber-security hygiene aimed at protecting businesses from ransomware attacks should make full use of the wide range of resources available on cyber resilience, IT security and protecting against malware attacks. The UK National Cyber Security Centre (NCSC) has produced practical guidance on how organizations can protect themselves in cyberspace, which it updates regularly. Indeed, the NCSC’s 10 steps to cyber security have now been adopted by most of the FTSE 350.

Closing thoughts

The financial services industry has much to gain from the effective implementation of fintech, regtech and insurtech, but the practical reality is that there are numerous challenges to overcome before the potential benefits can be realised. Investment continues to be needed in skill sets, systems upgrades and cyber resilience before firms can deliver technological innovation without endangering good customer outcomes.

An added complication is the business need to innovate while looking over one shoulder at the threat posed by bigtech. There are also concerns for solution providers. The last year has seen many technology start-ups going bust and far fewer new start-ups getting off the ground – an apparent parallel, at least on the surface, to the dotcom bubble. Solutions need to be practical, providers need to be careful not to overpromise and underdeliver, and above all developments should be aimed at genuine problems, not be solutions looking for a problem. There are nevertheless potentially substantive benefits to be gained from implementing fintech, regtech and insurtech solutions. For risk and compliance functions, much of the benefit may come from the ability to automate rote processes with increasing accuracy and speed. Indeed, when 900 respondents to the 10th annual cost of compliance survey report were asked to look into their crystal balls and predict the biggest change for compliance in the next 10 years, the largest response was automation.

Technology and its failure or misuse is increasingly being linked to the personal liability and accountability of senior managers. Chief executives, board members and other senior individuals will be held accountable for failures in technology and should therefore ensure their skill set is up-to-date. Regulators and politicians alike have shown themselves to be increasingly intolerant of senior managers who fail to take the expected reasonable steps with regards to any lack of resilience in their firm’s technology.

This year’s findings suggest firms may find it beneficial to consider:

  • Is fintech (and regtech) properly considered as part of the firm’s strategy? It is important for regtech especially not to be forgotten about in strategic terms: a systemic failure arising from a regtech solution has great capacity to cause problems for the firm – the UK FCA’s actions on regulatory reporting, among other things, are an indicator of this.
  • Not all firms seem to have fully tackled the governance challenge fintech implies: greater specialist skills may be needed at board level and in risk and compliance functions.
  • Lack of in-house skills was given as a main reason for failing to develop fintech or regtech solutions. It is heartening that firms understand the need for those skills. As fintech/regtech becomes mainstream, however, firms may be pressed into developing such solutions. Is there a plan in place to plug the skills gap?
  • Only 22% of firms reported that they need more resources to evaluate, understand and deploy fintech/regtech solutions. This suggests 78% of firms are unduly relaxed about the resources needed in the second line of defence to ensure fintech/regtech solutions are properly monitored. This may be a correct conclusion, but it seems potentially bullish.

Click here to access Thomson Reuters’ Survey Results

Benchmarking digital risk factors facing financial service firms

Risk management is the foundation upon which financial institutions are built. Recognizing risk in all its forms—measuring it, managing it, mitigating it—is critical to success. But has every firm achieved that goal? It doesn’t take in-depth research beyond the myriad of breach headlines to answer that question.

But many important questions remain: What are key dimensions of the financial sector Internet risk surface? How does that surface compare to other sectors? Which specific industries within Financial Services appear to be managing that risk better than others? We take up these questions and more in this report.

  1. The financial sector boasts the lowest rate of high and critical security exposures among all sectors. This indicates they’re doing a good job managing risk overall.
  2. But not all types of financial service firms appear to be managing risk equally well. For example, the rate of severe findings in the smallest commercial banks is 4x higher than that of the largest banks.
  3. It’s not just small community banks struggling, however. Securities and Commodities firms show a disconcerting combination of having the largest deployment of high-value assets AND the highest rate of critical security exposures.
  4. Others appear to be exceeding the norm. Take credit card issuers: they typically have the largest Internet footprint but balance that by maintaining the lowest rate of security exposures.
  5. Many other challenges and risk factors exist. For instance, the industry average rate of severe security findings in critical cloud-based assets is 3.5x that of assets hosted on-premises.

Dimensions of the Financial Sector Risk Surface

As Digital Transformation ushers in a plethora of changes, critical areas of risk exposure are also changing and expanding. We view the risk surface as anywhere an organization’s ability to operate, reputation, assets, legal obligations, or regulatory compliance is at risk. The aspects of a firm’s risk exposure that are associated with or observable from the internet are considered its internet risk surface. In Figure 1, we compare five key dimensions of the internet risk surface across different industries and highlight where the financial sector ranks among them.

  • Hosts: Number of internet-facing assets associated with an organization.
  • Providers: Number of external service providers used across hosts.
  • Geography: Measure of the geographic distribution of a firm’s hosts.
  • Asset Value: Rating of the data sensitivity and business criticality of hosts based on multiple observed indicators. High-value systems include those that collect GDPR- and CCPA-regulated information.
  • Findings: Security-relevant issues that expose hosts to various threats, following the CVSS rating scale.

[Figure 1: Key dimensions of the internet risk surface across industries]

The values recorded in Figure 1 for these dimensions represent what’s “typical” (as measured by the mean or median) among organizations within each sector. There’s a huge amount of variation, meaning not all financial institutions operate more external hosts than all realtors, but what you see here is the general pattern. The blue highlights trace the ranking of Finance along each dimension.

Financial firms are undoubtedly aware of these tendencies and the need to protect those valuable assets. What’s more, that awareness appears to translate fairly effectively into action. Finance boasts the lowest rate of high and critical security exposures among all sectors. We also ran the numbers specific to high-value assets, and financial institutions show the lowest exposure rates there too. All of this aligns pretty well with expectations—financial firms keep a tight rein on their valuable Internet-exposed assets.

This control tendency becomes even more apparent when examining the distribution of hosts with severe findings in Figure 2. Blue dots mark the average exposure rate for the entire sector (and correspond to values in Figure 1), while the grey bars indicate the amount of variation among individual organizations within each sector. The fact that Finance exhibits the least variation shows that even rotten apples don’t fall as far from the Finance tree as they often do in other sectors. Perhaps a rising tide lifts all boats?

[Figure 2: Distribution of hosts with severe findings within each sector]

Security Exposures in Financial Cloud Deployments

We now know financial institutions do well minimizing security findings, but does that record stand equally strong across all infrastructure? Figure 3 answers that question by featuring four of the five key risk surface dimensions:

  • the proportion of hosts (square size),
  • asset value (columns),
  • hosting location (rows),
  • and the rate of severe security findings (color scale and value label).

This view facilitates a range of comparisons, including the relative proportion of assets hosted internally vs. in the cloud, how asset value distributes across hosting locales, and where high-severity issues accumulate.
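
A comparison like Figure 3 can be reproduced from host-level data with a simple pivot. The pandas sketch below assumes hypothetical columns (hosting, asset_value, has_severe_finding) rather than the report’s actual dataset.

```python
# Rate of hosts with severe findings by asset value and hosting location,
# the same cut as Figure 3. The column names and records are hypothetical.
import pandas as pd

hosts = pd.DataFrame({
    "hosting":            ["on-prem", "on-prem", "cloud", "cloud", "cloud", "on-prem"],
    "asset_value":        ["high", "medium", "high", "low", "high", "high"],
    "has_severe_finding": [0, 0, 1, 0, 1, 0],
})

rate = (hosts
        .pivot_table(index="hosting", columns="asset_value",
                     values="has_severe_finding", aggfunc="mean")
        .round(2))
print(rate)   # each cell = share of hosts with at least one severe finding
```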

[Figure 3: Rate of severe security findings by asset value and hosting location]

From Figure 3, box sizes indicate that organizations in the financial sector host a majority of their Internet-facing systems on-premises, but do leverage the cloud to a greater degree for low-value assets. The bright red box makes it apparent that security exposures concentrate more acutely in high-value assets hosted in the cloud. Overall, the rate of severe findings in cloud-based assets is 3.5x that of on-prem. This suggests the angst many financial firms have over moving to the cloud does indeed have some merit. But when we examine the Finance sector relative to others in Figure 4 the intensity of exposures in critical cloud assets appears much less drastic.

In Figure 3, we can see that the largest number of hosts are on-prem and of medium value. But high-value assets in the cloud exhibit the highest rate of findings.

Given that cloud vs. on-prem exposure disparity, we feel the need to caution against jumping to conclusions. We could interpret these results to proclaim that the cloud isn’t ready for financial applications and should be avoided. Another interpretation could suggest that it’s more about organizational readiness for the cloud than the inherent insecurity of the cloud. Either way, it appears that many financial institutions migrating to the cloud are handling that paradigm shift better than others.

It must also be noted that not all cloud environments are the same. Our Cloud Risk Surface report discovered an average 12x difference between cloud providers with the highest and lowest exposure rates. We still believe this says more about the typical users and use cases of the various cloud platforms than about any intrinsic security inequalities. But at the same time, we recommend evaluating cloud providers based on internal features as well as the tools and guidance they make available to help customers secure their environments. Certain clouds are undoubtedly a better match for financial services use cases, while others are less so.

[Figure 4: Severe findings in critical cloud assets, Finance relative to other sectors]

Risk Surface of Subsectors within Financial Services

Having compared Finance to other sectors at a high level, we now examine the risk surface of major subsectors of financial services according to the following NAICS designations:

  • Insurance Carriers: Institutions engaged in underwriting and selling annuities, insurance policies, and benefits.
  • Credit Intermediation: Includes banks, savings institutions, credit card issuers, loan brokers, and processors, etc.
  • Securities & Commodities: Investment banks, brokerages, securities exchanges, portfolio management, etc.
  • Central Banks: Monetary authorities that issue currency, manage national money supply and reserves, etc.
  • Funds & Trusts: Funds and programs that pool securities or other assets on behalf of shareholders or beneficiaries.

[Figure 5: Risk surface dimensions across financial services subsectors]

Figure 5 compares these Finance subsectors along the same dimensions used in Figure 1. At the top, we see that Insurance Carriers generally maintain a large Internet surface area (hosts, providers, countries), but a comparatively lower ranking for asset value and security findings. The Credit Intermediation subsector (the NAICS designation that includes banks, brokers, creditors, and processors) follows a similar pattern. This indicates that such organizations are, by and large, able to maintain some level of control over their expanding risk surface.

The Securities and Commodities subsector shows a disconcerting combination: a leading share of high-value assets and a leading share of highly critical security findings. It suggests either unusually high risk tolerance or ineffective risk management (or both), leaving those valuable assets overexposed. The Funds and Trusts subsector exhibits a more risk-averse approach, minimizing exposures across its relatively small digital footprint of valuable assets.

Risk Surface across Banking Institutions

Given that the financial sector is so broad, we thought a closer examination of the risk surface particular to banking institutions was in order. Banks have long concerned themselves with risk. Well before the rise of the Internet or mobile technologies, banks made their profits by determining how to gauge the risk of potential borrowers or loans, plotting the risk and reward of offering various deposit and investment products, or entering different markets, allowing access through several delivery channels. It could be said that the successful management and measurement of risk throughout an organization is perhaps the key factor that has always determined the relative success or failure of any bank.

As a highly-regulated industry in most countries, banking institutions must also consider risk from more than a business or operational perspective. They must take into account the compliance requirements to limit risk in various areas, and ensure that they are properly securing their systems and services in a way that meets regulatory standards. Such pressures undoubtedly affect the risk surface and Figure 6 hints at those effects on different types of banking institutions.

Credit card issuers earn the honored distinction of having the largest average number of Internet-facing hosts (by far) while achieving the lowest prevalence of severe security findings. Credit unions flip this trend with the fewest hosts and most prevalent findings. This likely reflects the perennial struggle of credit unions to get the most bang from their buck.

Traditionally well-resourced commercial banks leverage the most third party providers and have a presence in more countries, all with a better-than-average exposure rate. Our previous research revealed that commercial banks were among the top two generators and receivers of multi-party cyber incidents, possibly due to the size and spread of their risk surface.

[Figure 6: Risk surface across banking institutions]

Two Things to Consider

  1. In this interconnected world, third-party and fourth-party risk is your risk. If you are a financial institution, particularly a commercial bank, take a moment to congratulate yourself on managing risk well – but only for a moment. Why? Because every enterprise is critically dependent on a wide array of vendors and partners that span a broad spectrum of industries. Their risk is your risk. The work of your third-party risk team is critically important in holding your vendors accountable to managing your risk interests well.
  2. Managing risk—whether internal or third-party—requires focus. There are simply too many things to do, giving rise to the endless “hamster wheel of risk management.” A better approach starts with obtaining an accurate picture of your risk surface and the critical exposures across it. This includes third-party relationships, and now fourth-party risk, which bank regulators are now requiring. Do you have the resources to sufficiently manage this? Do you know your risk surface?

Click here to access Riskrecon Cyentia’s Study

Uncertainty Visualization

Uncertainty is inherent to most data and can enter the analysis pipeline during the measurement, modeling, and forecasting phases. Effectively communicating uncertainty is necessary for establishing scientific transparency. Further, people commonly assume that there is uncertainty in data analysis, and they need to know the nature of the uncertainty to make informed decisions.

However, understanding even the most conventional communications of uncertainty is highly challenging for novices and experts alike, due in part to the abstract nature of probability and to ineffective communication techniques. Reasoning with uncertainty is universally difficult, but researchers are revealing how some types of visualizations can improve decision-making in a variety of diverse contexts,

  • from hazard forecasting,
  • to healthcare communication,
  • to everyday decisions about transit.

Scholars have distinguished different types of uncertainty, including

  • aleatoric (irreducible randomness inherent in a process),
  • epistemic (uncertainty from a lack of knowledge that could theoretically be reduced given more information),
  • and ontological uncertainty (uncertainty about how accurately the modeling describes reality, which can only be described subjectively).

The term risk is also used in some decision-making fields to refer to quantified forms of aleatoric and epistemic uncertainty, whereas uncertainty is reserved for potential error or bias that remains unquantified. Here we use the term uncertainty to refer to quantified uncertainty that can be visualized, most commonly a probability distribution. This article begins with a brief overview of the common uncertainty visualization techniques and then elaborates on the cognitive theories that describe how the approaches influence judgments. The goal is to provide readers with the necessary theoretical infrastructure to critically evaluate the various visualization techniques in the context of their own audience and design constraints. Importantly, there is no one-size-fits-all uncertainty visualization approach guaranteed to improve decisions in all domains, nor even guarantees that presenting uncertainty to readers will necessarily improve judgments or trust. Therefore, visualization designers must think carefully about each of their design choices or risk adding more confusion to an already difficult decision process.

Uncertainty Visualization Design Space

There are two broad categories of uncertainty visualization techniques. The first are graphical annotations that can be used to show properties of a distribution, such as the mean, confidence/credible intervals, and distributional moments.

Numerous visualization techniques use the composition of marks (i.e., geometric primitives, such as dots, lines, and icons) to display uncertainty directly, as in error bars depicting confidence or credible intervals. Other approaches use marks to display uncertainty implicitly as an inherent property of the visualization. For example, hypothetical outcome plots (HOPs) are random draws from a distribution that are presented in an animated sequence, allowing viewers to form an intuitive impression of the uncertainty as they watch.

The second category of techniques focuses on mapping probability or confidence to a visual encoding channel. Visual encoding channels define the appearance of marks using controls such as color, position, and transparency. Techniques that use encoding channels have the added benefit of adjusting a mark that is already in use, such as making a mark more transparent if the uncertainty is high. Marks and encodings that both communicate uncertainty can be combined to create hybrid approaches, such as in contour box plots and probability density and interval plots.

More expressive visualizations provide a fuller picture of the data by depicting more properties, such as the nature of the distribution and outliers, which can be lost with intervals. Other work proposes that showing distributional information in a frequency format (e.g., 1 out of 10 rather than 10%) more naturally matches how people think about uncertainty and can improve performance.

Visualizations that represent frequencies tend to be highly effective communication tools, particularly for individuals with low numeracy (i.e., difficulty working with numbers), and can help people overcome various decision-making biases.
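As an illustration of the frequency framing described above, the hypothetical sketch below builds a 20-dot quantile dotplot for an assumed normal predictive distribution (for example, minutes until a bus arrives), so each dot reads as a 1-in-20 chance. The distribution, its parameters, and the bus-waiting framing are assumptions for the example only.

```python
# A minimal quantile dotplot sketch: 20 dots, each representing a 1-in-20 chance,
# placed at evenly spaced quantiles of an assumed normal distribution.
import numpy as np
import matplotlib.pyplot as plt
from scipy.stats import norm

n_dots = 20
quantiles = (np.arange(n_dots) + 0.5) / n_dots      # midpoint quantiles
values = norm.ppf(quantiles, loc=15, scale=4)       # e.g., minutes until a bus arrives

# Stack the dots into integer bins so the chart reads as "x chances out of 20".
bins = np.round(values).astype(int)
heights, seen = [], {}
for b in bins:
    seen[b] = seen.get(b, 0) + 1
    heights.append(seen[b])

plt.scatter(bins, heights, s=200)
plt.yticks([])
plt.xlabel("Minutes until the bus arrives")
plt.title("Each dot represents a 1-in-20 chance")
plt.show()
```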

Researchers have dedicated a significant amount of work to examining which visual encodings are most appropriate for communicating uncertainty, notably in geographic information systems and cartography. One goal of these approaches is to evoke a sensation of uncertainty, for example, using fuzziness, fogginess, or blur.

Other work that examines uncertainty encodings also seeks to make looking up values more difficult when the uncertainty is high, such as value-suppressing color palettes.

Given that there is no one-size-fits-all technique, in the following sections, we detail the emerging cognitive theories that describe how and why each visualization technique functions.

VU1

Uncertainty Visualization Theories

The empirical evaluation of uncertainty visualizations is challenging. Many user experience goals (e.g., memorability, engagement, and enjoyment) and performance metrics (e.g., speed, accuracy, and cognitive load) can be considered when evaluating uncertainty visualizations. Beyond identifying the metrics of evaluation, even the simplest tasks have countless configurations. As a result, it is hard for any single study to sufficiently test the effects of a visualization to ensure that it is appropriate to use in all cases. Visualization guidelines based on a single study or a small set of studies are potentially incomplete. Theories can help bridge the gap between visualization studies by identifying and synthesizing converging evidence, with the goal of helping scientists make predictions about how a visualization will be used. Understanding foundational theoretical frameworks will empower designers to think critically about the design constraints in their work and generate optimal solutions for their unique applications. The theories detailed in the next sections are only those that have mounting support from numerous evidence-based studies in various contexts. As an overview, the table provides a summary of the dominant theories in uncertainty visualization, along with proposed visualization techniques.

UV2

General Discussion

There are no one-size-fits-all uncertainty visualization approaches, which is why visualization designers must think carefully about each of their design choices or risk adding more confusion to an already difficult decision process. This article overviews many of the common uncertainty visualization techniques and the cognitive theory that describes how and why they function, to help designers think critically about their design choices. We focused on the uncertainty visualization methods and cognitive theories that have received the most support from converging measures (e.g., the practice of testing hypotheses in multiple ways), but there are many approaches not covered in this article that will likely prove to be exceptional visualization techniques in the future.

There is no single visualization technique we endorse, but there are some that should be critically considered before employing them. Intervals, such as error bars and the Cone of Uncertainty, can be particularly challenging for viewers. If a designer needs to show an interval, we also recommend displaying information that is more representative, such as a scatterplot, violin plot, gradient plot, ensemble plot, quantile dotplot, or HOP. Just showing an interval alone could lead people to conceptualize the data as categorical. As alluded to in the prior paragraph, combining various uncertainty visualization approaches may be a way to overcome issues with one technique or get the best of both worlds. For example, each animated draw in a hypothetical outcome plot could leave a trace that slowly builds into a static display such as a gradient plot, or animated draws could be used to help explain the creation of a static technique such as a density plot, error bar, or quantile dotplot. Media outlets such as the New York Times have presented animated dots in a simulation to show inequalities in wealth distribution due to race. More research is needed to understand if and how various uncertainty visualization techniques function together. It is possible that combining techniques is useful in some cases, but new and undocumented issues may arise when approaches are combined.

In closing, we stress the importance of empirically testing each uncertainty visualization approach. As noted in numerous papers, the way that people reason with uncertainty is non-intuitive, which can be exacerbated when uncertainty information is communicated visually. Evaluating uncertainty visualizations can also be challenging, but it is necessary to ensure that people correctly interpret a display. A recent survey of uncertainty visualization evaluations offers practical guidance on how to test uncertainty visualization techniques.

Click here to access the entire article in Handbook of Computational Statistics and Data Science

The exponential digital social world

Tech-savvy start-ups with natively digital business models regard this point in time as the best time in the history of the world to invent something. The world is buzzing with technology-driven opportunities leveraging the solid platform provided over the past 30 years, birthed from

  • the Internet,
  • then mobility,
  • social
  • and now the massive scale of cloud computing and the Internet of Things (IoT).

For the start-up community, this is a

  • platform for invention,
  • coupled with lowered / disrupted barriers,
  • access to venture capital,
  • better risk / benefit ratios
  • and higher returns through organisational agility.

Kevin Kelly, co-founder of Wired magazine, believes we are poised to create truly great things and that what’s coming is exponentially different, beyond what we envisage today – ‘Today truly is a wide open frontier. We are all becoming. It is the best time ever in human history to begin’ (June 2016). Throughout history, there have been major economic and societal shifts, and the revolutionary nature of these is only apparent retrospectively – at the time the changes were experienced as linear and evolutionary. But now is different. Information access is globalised and is seen as a democratic right for first world citizens and a human right for the less advantaged.

The genesis was the Internet and the scale is now exponential because cloud-based platforms embed connections between data, people and things into the very fabric of business and daily life. Economies are information and services-based and knowledge is a valued currency. This plays out at a global, regional, community and household level. Pro-active leaders of governments, businesses and communities addressing these trends stress the need for innovation and transformative change (vs incremental) to shape future economies and societies across the next few years. In a far reaching example of transformative vision and action, Japan is undertaking ‘Society 5.0’, a full national transformation strategy including policy, national digitisation projects and deep cultural changes. Society 5.0 sits atop a model of five waves of societal evolution to a ‘super smart society’. The ultimate state (5.0) is achieved through applying technological advancements to enrich the opportunities, knowledge and quality of life for people of all ages and abilities.

DD1

The Society 5.0 collaboration goes further than the digitisation of individual businesses and the economy; it includes all levels of Japanese society and the transformation of society itself. Society 5.0 is a framework to tackle several macro challenges that are amplified in Japan, such as an ageing population: today, 26.3% of the Japanese population is over 65, while for the rest of the world 20% of people will be over 60 by 2020. Japan is responding through the digitisation of healthcare systems and solutions, increased mobility and flexibility of work to keep people engaged in meaningful employment, and the digitisation of social infrastructure across communities and into homes. This journey is paved with important technology-enabled advances, such as

  • IoT,
  • robotics,
  • artificial intelligence,
  • virtual and augmented reality,
  • big data analytics
  • and the integration of cyber and physical systems.

Japan’s transformation approach is about more than embracing digital, it navigates the perfect storm of technology change and profound changes in culture, society and business models. Globally, we are all facing four convergent forces that are shaping the fabric of 21st century life.

  • It’s the digital social world – engaging meaningfully with people matters, not merely transacting
  • Generational tipping point – millennials now have the numbers as consumers and workers, their value systems and ways of doing and being are profoundly different
  • Business models – your value chain is no longer linear, you are becoming either an ecosystem platform or a player / supplier into that ecosystem
  • Digital is ubiquitous – like particles in the atmosphere, digital is all around us, connecting people, data and things – it’s the essence of 21st century endeavours

How do leaders of our iconic, successful industrial era businesses view this landscape? Leaders across organisations, governments and communities are alert to the opportunities and threats from an always on economy. Not all leaders are confident they have a cohesive strategy and the right resources to execute a transformative plan for success in this new economy of knowledge, digital systems and the associated intangible assets – the digital social era. RocketSpace, a global ecosystem providing a network of campuses for start-up acceleration, estimate that 10 years from now, in 2027, 75% of today’s S&P 500 will be replaced by digital start-ups (RocketSpace Disruption Brief, March 2017). Even accounting for some potential skew in this estimate, we are in the midst of unprecedented change.

What is change about?

What are the strategic assets and capabilities that an organisation needs to have when bridging from the analogue to the digital world? Key to succeeding in this is taking the culture and business models behind successful start-ups and imbuing them into the mature enterprise. Organisations need to employ outside-in, stakeholder-centric design-thinking and adopt leveraged business models that create

  • scaled resources,
  • agility,
  • diversity of ideas

and headspace to

  • explore,
  • experiment,
  • fail and try again.

The need to protect existing assets and sources of value creation remains important. However, what drives value is changing, so a revaluation of portfolios is needed against a new balance sheet, the digital social balance sheet.

The Dimension Data Digital Social Balance Sheet evolved from analysing transformational activities with our global clients from the S&P500, the government sector, education and public health sectors and not-for-profits. We also learnt from collaborations with tech start-ups and our parent company, Nippon Telegraph and Telephone Group’s (NTT) R&D investment activities, where they create collaborative ecosystems referred to as B2B2X. The balance sheet represents the seven top level strategic capabilities driving business value creation in the digital social era. This holds across all industries, though it may be expressed differently and have different relative emphasis for various sectors – for example, stakeholders may include employees, partners, e-lance collaborators, customers, patients, shareholders or a congregation.

DD2

Across each capability we have defined five levels of maturity, and this extends the balance sheet into the Dimension Data Digital Enterprise Capability Maturity Model. This is a holistic, globally standardised framework. From this innovative tool, organisations can

  • assess themselves today,
  • specify their target state,
  • conduct competitive benchmarking,
  • and map out a clear pathway of transitions for their business and stakeholders.

The framework can also be applied to construct your digital balance sheet reporting – values and measures can be monitored against organisational objectives.

Where does your organisation sit? Thinking about your best and worst experiences with a business or government organisation this year, what is revealed about their capabilities? Across each of the pillars of this model, technology is a foundation and an enabler of progressive maturity. For example, effective data architecture and data management platforming underpins the information value capability of responsiveness. A meaningful capability will be enabled by the virtual integration of hybrid data sources (internal systems, external systems, machines, sensors, social) for enhanced perception, discovery, insight and action by both knowledge workers and AI agents. Uber is a leading innovator in this and is also applying deep learning to predict demand and direct supply, not just in time but just before time. In this, they are exploring beyond today’s proven and mainstream capabilities to generate unique business value.

Below is a high level assessment of three leading digitals at this point in their business evolution – Uber, Alibaba and the Estonian government. We infer their capabilities from our research of their organisational journeys and milestones, using published material such as articles and case studies, as well as our personal experiences engaging with their platforms. Note that each of these businesses’ capabilities are roughly in alignment across the seven pillars – this is key to sustainable value creation. For example, an updated online presence aimed at improving user experience delivers limited value if not integrated in real time across all channels, with information leveraged to learn and deepen engagement and processes designed around user context, able to adapt to fulfil the point in time need.

DD3

Innovation horizons

In the model below, key technology trends are shown. We have set out a view of their progression to exponential breakthrough (x axis) and the points at which these technologies will reach the peak of the adoption curve, flipping from early to late adopters (y axis). Relating this to the Digital Enterprise Capability Maturity Model, level 1 and 2 capabilities derive from what are now mature foundations (past). Level 3 aligns with what is different and has already achieved the exponential breakthrough point. Progressing to level 4 requires a preparedness to innovate and experiment with what is different and beyond. Level 5 entails an appetite to be a first mover, experimenting with technologies that will not be commercial for five to ten years but that potentially provide significant first-mover advantage. This is where innovators such as Elon Musk set their horizons with Tesla and SpaceX.

An example of all of this coming together at level 3 of digital capability maturity and the different horizon – involving cloud, mobility, big data, analytics, IoT and cybersecurity to enable a business to transform – is Amaury Sport Organisation (A.S.O.) and its running of the Tour de France. The Tour was conceived in 1903 as an event to promote and sell A.S.O.’s publications and is today the most watched annual sporting event in the world. Spectators, athletes and coaches are hungry for details and insights into the race and the athletes. Starting from the 2015 Tour, A.S.O. has leapt forward as a digital business. Data collected from sensors connected to the cyclists’ bikes is aggregated on a secure, cloud-based big data platform, analysed in real time and turned into entertaining insights and valuable performance statistics for followers and stakeholders of the Tour. This has opened up new avenues of monetisation for A.S.O. Dimension Data is the technology services partner enabling this IoT-based business platform.

DD4

If your organisation is not yet on the technology transformation path, consider starting now. For business to prosper from the digital economy, you must be platformed to enable success – ready and capable to seamlessly connect humans, machines and data and to assure secure ecosystem flows. The settings of our homes, cars, schools and learning institutions, health and fitness establishments, offices, cities, retail outlets, factories, defence forces, emergency services, logistics providers and other services are all becoming forever different in this digital atmosphere.

Where is your innovation horizon set? The majority of our co-innovation agendas with our clients are focused on the beyond horizons. In relation to this, we see four pairs of interlinked technologies as being most impactful:

  • artificial intelligence and robotics;
  • virtual/augmented reality and the human machine interface;
  • nano-technology and 3D/4D printing;
  • and cybersecurity and the blockchain.

Artificial intelligence and robotics

Artificial intelligence (AI) is both a science and set of technologies inspired by the way humans sense, perceive, learn, reason, and act.

We are rapidly consuming AI and embedding it into our daily living, taking it for granted. Think about how we rely upon GPS and location services, use Google for knowledge, expect Facebook to identify and tag faces, ask Amazon to recommend a good read and Spotify to generate a personalised music list. Not so long ago, these technologies were awe-inspiring.

Now, and into the next 15 years, there is an AI revolution underway, a constellation of different technologies coming together to propel AI forward as a central force in society. Our relationships with machines will become more nuanced and personalised. There’s a lot to contemplate here. We really are at a juncture where discussion is needed at all levels about the ways that we will and won’t deploy AI to promote democracy and prosperity and equitably share the wealth created from it.

The areas in which this will have the fastest impact are transportation, traditional employment and workplaces, the home, healthcare, education, public safety and security and entertainment. Let’s look at examples from some of these settings:

Transportation – Autonomous vehicles encapsulate IoT, all forms of machine learning, computer vision and also robotics. This will soon break through the exponential point, once the physical hardware systems are robust enough.

Healthcare – there is significant potential for use of AI in pure and applied research and healthcare service delivery, as well as aged and disability related services. The collection of data from clinical equipment (e.g., MRI scanners and surgical robots), clinical electronic health records, facility-based room sensors, personal monitoring devices, and mobile apps is allowing more complete digital health records to be compiled. Analysis of these records will evolve clinical understanding. For example, NTT Data provides a Unified Clinical Archive Service for radiologists, providing machine learning interpretation of MRI brain imagery. The service provides digital translations of MRI brain scans and contains complete data sets of normal brain functions (gathered from Johns Hopkins University in the US). Radiologists are able to quantitatively evaluate their patients’ results against the normal population to improve diagnostics. Each new dataset adds to the ecosystem of knowledge.

Education – AI promises to enhance education at all levels, particularly by providing personalisation at scale for all learners. Interactive machine tutors are now being matched to students. Learning analytics can detect how a student is feeling, how they will perform and what the best likely interventions to improve learning outcomes are. Online learning has also enabled great teachers to boost their class numbers to worldwide audiences, while at the same time students’ individual learning needs can be augmented through analysis of their response to the global mentor. Postgraduate and professional learning is set to become more modular and flexible, with AI used to assess current skills and work-related projects and to match learning modules of most immediate career value – an assemble-your-own-degree approach. Virtual reality, along with AI, is also changing learning content and pathways to mastery, and so will be highly impactful. AI will never replace good teaching, and so the meaningful integration of AI with face-to-face teaching will be key.

Public safety and security – Cybersecurity is a key area for applied AI. Machine learning applied to the datasets from ubiquitously placed surveillance cameras and drones is another key area. In areas of tax, financial services, insurance and international policing, algorithms are improving the conduct of fraud investigations. A significant driver for advances in deep learning, particularly in video and audio processing, has come off the back of anti-terrorist analytics. All of these things are now coming together in emergency response planning and orchestration and in the emerging field of predictive policing.

Virtual reality/augmented reality and the human machine interface

The lines between the physical and digital worlds are merging along the ‘virtuality’ continuum of augmented and virtual reality. Augmented reality (AR) technologies overlay digital information on the ‘real world’; the digital information is delivered via a mechanism such as a heads-up display, smart glass wall or wrist display. Virtual reality (VR) immerses a person in an artificial environment where they interact with data, their visual senses (and others) controlled by the VR system. Augmented virtuality blends AR and VR. As virtuality becomes part of our daily lives, the ways we interact with each other, learn, work and transact are being re-shaped.

At the 2017 NTT R&D Fair in Tokyo, the use of VR in sports coaching and the spectator experience was showcased, with participants able to experience playing against elite tennis and baseball players and riding in the Tour de France. A VR spectator experience also enabled viewers to directly experience the rider’s view and the sensation of the rider’s heart rate and fatigue levels. These applications of VR and AI are being rapidly incorporated into sports analytics and coaching.

Other enterprise VR use cases include

  • teaching peacekeeping skills to troops in conflict zones,
  • the creation of travel adventures,
  • immersion in snowy climate terrain to reduce pain for burn victims,
  • teaching autistic teenagers to drive,
  • and 3D visualisations of organs prior to conducting surgery.

It isn’t hard to imagine the impact on educational and therapeutic services, government service delivery, a shopping experience, on social and cultural immersion for remote communities and on future business process design and product engineering.

Your transformation journey

Every business is becoming a digital business. Some businesses are being caught off guard by the pace and nature of change. They are finding themselves reactive, pulled into the digital social world by the forces of disruption and the new rules of engagement set by clients, consumers, partners, workers and competitors. Getting on the front foot is important in order to control your destiny and assure future success. The disruptive forces upon us present opportunities to create a new future and value for your organisation and stakeholders. There are also risks, but the risk management approach of doing nothing is not viable in these times.

Perhaps your boardroom and executive discussions need to step back from thinking about the evolution of the current business and think in an unconstrained, ‘art of the possible’ manner about the impact of global digital disruption and the sources of value creation into the future. What are the opportunities, threats and risks that these provide? What is in the best interests of the shareholders? How will you retain and improve your sector competitiveness and use digital to diversify?

Is a new industry play now possible? Is your transformed digital business creating the ecosystem (acting as a platform business) or operating within another? How will it drive the business outcomes and value you expect and some that you haven’t envisaged at this point?

The digital balance sheet and seven pillars of digital enterprise capability could be used as the paving blocks for your pathway from analogue to digital. The framework can also guide and measure your progressive journey.

DD5

Our experiences with our clients globally show us that the transformation journey is most effective when executed across three horizons of change. Effective three-horizon planning follows a pattern for course charting, with a general flow of:

  1. Establish – laying out the digital fabric to create the core building blocks for the business and executing the must do/no regret changes that will uplift and even out capability maturity to a minimum of level 2.
  2. Extend – creating an agile, cross-functional and collaborative capability across the business and executing a range of innovation experiments that create options, in parallel with the key transformative moves.
  3. Enhance – embedding the digital social balance sheet into ‘business as usual’, and particularly imbuing innovation to continuously monitor, renew and grow the organisation’s assets.

In this, there are complexities and nuances of the change, including:

  • Re-balancing of the risk vs opportunity appetite from the board
  • Acceptable ROI models
  • The ability of the organisation to absorb change
  • Dependencies across and within the balance sheet pillars
  • Maintaining transitional balance across the pillars
  • Managing finite resources – achieving operational cost savings to enable the innovation investment required to achieve the target state

The horizon plans also need to have flex – so that pace and fidelity can be dialled up or down to respond to ongoing disruption and the dynamic operational context of your organisation.

Don’t turn away from analogue wisdom; it is an advantage. Born-digital enterprises don’t have established physical channels and presence, have not experienced economic cycles and lack longitudinal wisdom. By valuing analogue experience and also embracing the essence of outside-in thinking and the new digital social business models, the executive can confidently execute.

A key learning is that the journey is also the destination – by

  • mobilising cross functional teams,
  • drawing on diverse skills and perspectives,

and empowering them to act on quality information that is meaningful to them – you uplift your organisational capabilities, and this in itself will become one of your most valuable assets.

Click here to access Dimension Data’s detailed study

The Role of Trust in Narrowing Protection Gaps

The Geneva Association 2018 Customer Survey in 7 mature economies reveals that for half of the respondents, increased levels of trust in insurers and intermediaries would encourage additional insurance purchases, a consistent finding across all age groups. In emerging markets this share is expected to be even higher, given a widespread lack of experience with financial institutions, the relatively low presence of well-known and trusted insurer brands and a number of structural legal and regulatory shortcomings.

GA1

Against this backdrop, a comprehensive analysis of the role and nature of trust in insurance, with a focus on the retail segment, is set to offer additional important insights into how to narrow the protection gap—the difference between needed and available protection—through concerted multi-stakeholder efforts.

The analysis is based on economic definitions of trust, viewed as an ‘institutional economiser’ that reduces or even eliminates the need for various procedures of verification and proof, thereby cutting transaction costs.

In the more specific context of insurance, trust can be defined as a customer’s bet on an insurer’s future contingent actions, ranging

  • from paying claims
  • to protecting personal data
  • and ensuring the integrity of algorithms.

Trust is the lifeblood of the insurance business, as its carriers sell contingent promises to pay, often at a distant and unspecified point in the future.

From that perspective, we can explore the implications of trust for both insurance demand and supply, i.e. its relevance to the size and nature of protection gaps. For example, trust influences behavioural biases such as customers’ propensity for excessive discounting, or in other words, an irrationally high preference for money today over money tomorrow that dampens demand for insurance. In addition, increased levels of trust lower customers’ sensitivity to the price of coverage.

GA2

Trust also has an important influence on the supply side of insurance. The cost loadings applied by insurers to account for fraud are significant and lead to higher premiums for honest customers. Enhanced insurer trust in their customers’ prospective honesty would enable

  • lower cost loadings,
  • less restrictive product specifications
  • and higher demand for insurance.

The potential for lower cost loadings is significant. In the U.S. alone, according to the Insurance Information Institute (2019), fraud in the property and casualty sector is estimated to cost the insurance industry more than USD 30 billion annually, about 10% of total incurred losses and loss adjustment expenses.

Another area where trust matters greatly to the supply of insurance coverage is asymmetric information. A related challenge is moral hazard, or the probability of a person exercising less care in the presence of insurance cover. In this context, however, digital technologies and modern analytics are emerging as potentially game-changing forces. Some pundits herald the end of the age of asymmetric information and argue that a proliferation of information will

  • counter adverse selection and moral hazard,
  • creating transparency (and trust) for both insurers and insureds
  • and aligning their respective interests.

Other experts caution that this ‘brave new world’ depends on the development of customers’ future privacy preferences.

One concrete example is the technology-enabled rise in peer-to-peer trust and the amplification of word-of-mouth. This general trend is now entering the world of insurance as affinity groups and other communities organise themselves through online platforms. In such business models, trust in incumbent insurance companies is replaced with trust in peer groups and the technology platforms that organise them. Another example is the blockchain. In insurance, some start-ups have pioneered the use of blockchain to improve efficiency, transparency and trust in unemployment, property and casualty, and travel insurance, for example. In more advanced markets, ecosystem partners can serve as another example of technology-enabled trust influencers.

These developments are set to usher in an era in which customer data will be a key source of competitive edge. Therefore, gaining and maintaining customers’ trust in how data is used and handled will be vitally important for insurers’ reputations. This also applies to the integrity and interpretability of artificial intelligence tools, given the potential for biases to be embedded in algorithms.

In spite of numerous trust deficits, insurers appear to be in a promising position to hold their own against technology platforms, which are under increasing scrutiny for dubious data handling practices. According to the Geneva Association 2018 Customer Survey, only 3% of all respondents (and 7% of the millennials) polled name technology platforms as their preferred conduits for buying insurance. Insurers’ future performance, in terms of responsible data handling and usage as well as algorithm building, will determine whether their current competitive edge is sustainable. It should not be taken for granted, as—especially in high-growth markets—the vast majority of insurance customers would at least be open to purchasing insurance from new entrants.

GA3

In order to substantiate a multi-stakeholder road map for narrowing protection gaps through fostering trust, we propose a triangle of determinants of trust in insurance.

  1. First, considering the performance of insurers, how an insurer services a policy and settles claims is core to building or destroying trust.
  2. Second, regarding the performance of intermediaries, it is intuitively plausible that those individuals and organisations at the frontline of the customer interface are critically important to the reputation and the level of trust placed in the insurance carrier.
  3. And third, taking into account sociodemographic factors, most recent research finds that trust in insurance is higher among females.

This research also suggests that trust in insurance decreases with age, and insurance literacy has a strong positive influence on the level of trust in insurance.

Based on this paper’s theoretical and empirical findings, we propose the following road map for ensuring that insurance markets are optimally lubricated with trust. This road map includes 3 stakeholder groups that need to act in concert: insurers (and their intermediaries), customers, and regulators/ lawmakers.

GA4

In order for insurers and their intermediaries to bolster customer trust—and enhance their contribution to society—we recommend they do the following:

  • Streamline claims settlement with processes that differentiate between honest and (potentially) dishonest customers. Delayed claims settlement, which may be attributable to procedures needed for potentially fraudulent customer behaviour, causes people to lose trust in insurers and is unfair to honest customers.
  • Increase product transparency and simplicity, with a focus on price and value. Such efforts could include aligning incentives through technology-enabled customer engagement and utilising data and analytics for simpler and clearer underwriting procedures. This may, however, entail delicate trade-offs between efficiency and privacy.
  • ‘Borrow’ trust: As a novel approach, insurers may partner with non-insurance companies or influencers to access new customers through the implied endorsement of a trusted brand or individual. Such partnerships are also essential to extending the business model of insurance beyond its traditional centre of gravity, which is the payment of claims.

Customers and their organisations are encouraged to undertake the following actions:

  • Support collective action against fraud. Insurance fraud hinders mutual trust and drives cost loadings, which are unfair to honest customers and lead to suboptimal levels of aggregate demand.
  • Engage with insurers who leverage personal data for the benefit of the customer. When insurers respond to adverse selection, they increase rates for everyone in order to cover their losses. This may cause low-risk customers to drop out of the company’s risk pool and forego coverage. ‘Real time’ underwriting methods and modern analytics are potential remedies to the undesirable effects of adverse selection.

Recommendations for policymakers and regulators are the following:

  • Protect customers. Effective customer protection is indispensable to lubricating insurance markets with trust. First, regulators should promote access to insurance through regulations that interfere with the market mechanism for rate determination or through more subtle means, such as restrictions on premium rating factors. Second, regulators should make sure that insurers have the ability to pay claims and remain solvent. This may involve timely prudential regulatory intervention.
  • Promote industry competition. There is a positive correlation between an insurance market’s competitiveness and levels of customer trust. In a competitive market, the cost to customers for switching from an underperforming insurance carrier to a more favourable competitor is relatively low. However, the cost of customer attrition for insurers is high. Therefore, in a competitive market, the onus is on insurers to perform well and satisfy customers.

Click here to access Geneva Association’s Research Debrief

 

Overview on EIOPA Consultation Paper on the Opinion on the 2020 review of Solvency II

The Solvency II Directive provides that certain areas of the framework should be reviewed by the European Commission at the latest by 1 January 2021, namely:

  • long-term guarantees measures and measures on equity risk,
  • methods, assumptions and standard parameters used when calculating the Solvency Capital Requirement standard formula,
  • Member States’ rules and supervisory authorities’ practices regarding the calculation of the Minimum Capital Requirement,
  • group supervision and capital management within a group of insurance or reinsurance undertakings.

Against that background, the European Commission issued a request to EIOPA for technical advice on the review of the Solvency II Directive in February 2019 (call for advice – CfA). The CfA covers 19 topics. In addition to topics that fall under the four areas mentioned above, the following topics are included:

  • transitional measures
  • risk margin
  • Capital Markets Union aspects
  • macroprudential issues
  • recovery and resolution
  • insurance guarantee schemes
  • freedom to provide services and freedom of establishment
  • reporting and disclosure
  • proportionality and thresholds
  • best estimate
  • own funds at solo level

EIOPA is requested to provide technical advice by 30 June 2020.

Executive summary

This consultation paper sets out technical advice for the review of the Solvency II Directive. The advice is given in response to a call for advice from the European Commission. EIOPA will provide its final advice in June 2020. The call for advice comprises 19 separate topics. Broadly speaking, these can be divided into three parts.

  1. Firstly, the review of the long term guarantee measures. These measures were always foreseen as being reviewed in 2020, as specified in the Omnibus II Directive. A number of different options are being consulted on, notably on extrapolation and on the volatility adjustment.
  2. Secondly, the potential introduction of new regulatory tools in the Solvency II Directive, notably on macro-prudential issues, recovery and resolution, and insurance guarantee schemes. These new regulatory tools are considered thoroughly in the consultation.
  3. Thirdly, revisions to the existing Solvency II framework including in relation to
    • freedom of services and establishment;
    • reporting and disclosure;
    • and the solvency capital requirement.

Given that the view of EIOPA is that overall the Solvency II framework is working well, the approach here has in general been one of evolution rather than revolution. The principal exceptions arise as a result either of supervisory experience, for example in relation to cross-border business; or of the wider economic context, in particular in relation to interest rate risk. The main specific considerations and proposals of this consultation paper are as follows:

  • Considerations to choose a later starting point for the extrapolation of risk-free interest rates for the euro or to change the extrapolation method to take into account market information beyond the starting point.
  • Considerations to change the calculation of the volatility adjustment to risk-free interest rates, in particular to address overshooting effects and to reflect the illiquidity of insurance liabilities.
  • The proposal to increase the calibration of the interest rate risk submodule in line with empirical evidence. The proposal is consistent with the technical advice EIOPA provided on the Solvency Capital Requirement standard formula in 2018.
  • The proposal to include macro-prudential tools in the Solvency II Directive.
  • The proposal to establish a minimum harmonised and comprehensive recovery and resolution framework for insurance.

A background document to this consultation paper includes a qualitative assessment of the combined impact of all proposed changes. EIOPA will collect data in order to assess the quantitative combined impact and to take it into account in the decision on the proposals to be included in the advice. Beyond the changes on interest rate risk EIOPA aims in general for a balanced impact of the proposals.

The following paragraphs summarise the main content of the consulted advice per chapter.

Long-term guarantees measures and measures on equity risk

EIOPA is considering a later starting point for the extrapolation of risk-free interest rates for the euro, or a change to the extrapolation method to take into account market information beyond the starting point. Changes are considered with the aim of avoiding the underestimation of technical provisions and wrong risk management incentives. The impact on the stability of solvency positions and on financial stability is taken into account. The paper sets out two approaches to calculating the volatility adjustment to the risk-free interest rates. Both approaches include application ratios to mitigate overshooting effects of the volatility adjustment and to take into account the illiquidity characteristics of the insurance liabilities the adjustment is applied to.

  • One approach also establishes a clearer split between a permanent component of the adjustment and a macroeconomic component that only exists in times of wide spreads.

EIOPA2

  • The other approach takes into account the undertaking-specific investment allocation to further address overshooting effects.

EIOPA3

Regarding the matching adjustment to risk-free interest rates the proposal is made to recognise in the Solvency Capital Requirement standard formula diversification effects with regard to matching adjustment portfolios. The advice includes proposals to strengthen the public disclosure on the long term guarantees measures and the risk management provisions for those measures.

EIOPA1

The advice includes a review of the capital requirements for equity risk and proposals on the criteria for strategic equity investments and the calculation of long-term equity investments. Because of the introduction of the capital requirement on long-term equity investments, EIOPA intends to advise that the duration-based equity risk sub-module be phased out.

Technical provisions

EIOPA identified a number of aspects in the calculation of the best estimate of technical provisions where divergent practices among undertakings or supervisors exist. For some of these issues, where EIOPA’s convergence tools cannot ensure consistent practices, the advice sets out proposals to clarify the legal framework, mainly on

  • contract boundaries,
  • the definition of expected profits in future premiums
  • and the expense assumptions for insurance undertakings that have discontinued one product type or even their whole business.

With regard to the risk margin of technical provisions, the transfer values of insurance liabilities, the sensitivity of the risk margin to interest rate changes and the calculation of the risk margin for undertakings that apply the matching adjustment or the volatility adjustment were analysed. The analysis did not result in a proposal to change the calculation of the risk margin.

Own funds

EIOPA has reviewed the differences in tiering and limits approaches within the insurance and banking framework, utilising quantitative and qualitative assessment. EIOPA has found that they are justifiable in view of the differences in the business of both sectors.

EIOPA4

Solvency Capital Requirement standard formula

EIOPA confirms its advice provided in 2018 to increase the calibration of the interest rate risk sub-module. The current calibration underestimates the risk and does not take into account the possibility of a steep fall in interest rates, as experienced during the past years, or the existence of negative interest rates. The review

  • of the spread risk sub-module,
  • of the correlation matrices for market risks,
  • the treatment of non-proportional reinsurance,
  • and the use of external ratings

did not result in proposals for change.

Minimum Capital Requirement

Regarding the calculation of the Minimum Capital Requirement it is suggested to update the risk factors for non-life insurance risks in line with recent changes made to the risk factors for the Solvency Capital Requirement standard formula. Furthermore, proposals are made to clarify the legal provisions on noncompliance with the Minimum Capital Requirement.

EIOPA5

Reporting and disclosure

The advice proposes changes to the frequency of the Regular Supervisory Report to supervisors in order to ensure that the reporting is proportionate and supports risk-based supervision. Suggestions are made to streamline and clarify the expected content of the Regular Supervisory Report, with the aim of supporting insurance undertakings in fulfilling their reporting task, avoiding overlaps between different reporting requirements and ensuring a level playing field. Some reporting items are proposed for deletion because the information is also available through other sources. The advice includes a review of the reporting templates for insurance groups that takes into account earlier EIOPA proposals on the templates of solo undertakings and group specificities.

EIOPA proposes an auditing requirement for the balance sheet at group level in order to improve the reliability and comparability of the disclosed information. It is also suggested to delete the requirement to translate the summary of that report.

Proportionality

EIOPA has reviewed the rules for exempting insurance undertakings from the Solvency II Directive, in particular the thresholds on the size of insurance business. As a result, EIOPA proposes to maintain the general approach to exemptions but to reinforce proportionality across the three pillars of the Solvency II Directive.

Regarding thresholds EIOPA proposes to double the thresholds related to technical provisions and to allow Member States to increase the current threshold for premium income from the current amount of EUR 5 million to up to EUR 25 million.

EIOPA had reviewed the simplified calculation of the standard formula and proposed improvements in 2018. In addition to that the advice includes proposals to simplify the calculation of the counterparty default risk module and for simplified approaches to immaterial risks. Proposals are made to improve the proportionality of the governance requirements for insurance and reinsurance undertakings, in particular on

  • key functions (cumulation with operational functions, cumulation of key functions other than the internal audit, cumulation of key and AMSB function)
  • own risk and solvency assessment (ORSA) (biennial report),
  • written policies (review at least once every three years)
  • and administrative, management and supervisory bodies (AMSB) (the evaluation shall include an assessment of the adequacy of the composition, effectiveness and internal governance of the AMSB, taking into account the nature, scale and complexity of the risks inherent in the undertaking’s business).

Proposals to improve the proportionality in reporting and disclosure of Solvency II framework were made by EIOPA in a separate consultation in July 2019.

Group supervision

EIOPA proposes a number of regulatory changes to address the current legal uncertainties regarding supervision of insurance groups under the Solvency II Directive. This is a welcome opportunity, as the regulatory framework for groups is not very specific in many cases, while in others it relies on the mutatis mutandis application of solo rules without much clarification.

In particular, there are policy proposals to ensure that the

  • definitions applicable to groups,
  • scope of application of group supervision
  • and supervision of intragroup transactions, including issues with third countries

are consistent.

Other proposals focus on the rules governing the calculation of group solvency, including own funds requirements as well as any interaction with the Financial Conglomerates Directive. The last section of the advice focuses on the uncertainties related to the application of governance requirements at group level.

Freedom to provide services and freedom of establishment

EIOPA further provides suggestions in relation to cross border business, in particular to support efficient exchange of information among national supervisory authorities during the process of authorising insurance undertakings and in case of material changes in cross-border activities. It is further recommended to enhance EIOPA’s role in the cooperation platforms that support the supervision of cross-border business.

Macro-prudential policy

EIOPA proposes to include the macroprudential perspective in the Solvency II Directive. Based on previous work, the advice develops a conceptual approach to systemic risk in insurance and then analyses the current existing tools in the Solvency II framework against the sources of systemic risk identified, concluding that there is the need for further improvements in the current framework.

EIOPA7

Against this background, EIOPA proposes a comprehensive framework, covering the tools initially considered by the European Commission (improvements in Own Risk and Solvency Assessment and the prudent person principle, as well as the drafting of systemic risk and liquidity risk management plans), as well as other tools that EIOPA considers necessary to equip national supervisory authorities with sufficient powers to address the sources of systemic risk in insurance. Among the latter, EIOPA proposes to grant national supervisory authorities with the power

  • to require a capital surcharge for systemic risk,
  • to define soft concentration thresholds,
  • to require pre-emptive recovery and resolution plans
  • and to impose a temporary freeze on redemption rights in exceptional circumstances.

EIOPA8

Recovery and resolution

EIOPA calls for a minimum harmonised and comprehensive recovery and resolution framework for (re)insurers to deliver increased policyholder protection and financial stability in the European Union. Harmonisation of the existing frameworks and the definition of a common approach to the fundamental elements of recovery and resolution will avoid the current fragmented landscape and facilitate cross-border cooperation. In the advice, EIOPA focuses on the recovery measures including the request for pre-emptive recovery planning and early intervention measures. Subsequently, the advice covers all relevant aspects around the resolution process, such as

  • the designation of a resolution authority,
  • the resolution objectives,
  • the need for resolution planning
  • and for a wide range of resolution powers to be exercised in a proportionate way.

The last part of the advice is devoted to the triggers for

  • early intervention,
  • entry into recovery and into resolution.

EIOPA9

Other topics of the review

The review of the ongoing appropriateness of the transitional provisions included in the Solvency II Directive did not result in a proposal for changes. With regard to the fit and proper requirements of the Solvency II Directive, EIOPA proposes to clarify the role of national supervisory authorities in the ongoing supervision of the propriety of board members, and that they should have effective powers to act where qualifying shareholders are not proper. Further advice is provided in order to increase the efficiency and intensity of propriety assessments in complex cross-border cases by providing the possibility of joint assessments and the use of EIOPA’s powers to assist where supervisors cannot reach a common view.

Click here to access EIOPA’s detailed Consultation Paper

Mastering Financial Customer Data at Multinational Scale

Your Customer Data…Consolidated or Chaotic?

In an ideal world, you know your customers. You know

  • who they are,
  • what business they transact,
  • who they transact with,
  • and their relationships.

You use that information to

  • calculate risk,
  • prevent fraud,
  • uncover new business opportunities,
  • and comply with regulatory requirements.

The problem at most financial institutions is that customer data environments are highly chaotic. Customer data is stored in numerous systems across the company, most, if not all, of which have evolved over time in siloed environments according to business function. Each system has its

  • own management team,
  • technology platform,
  • data models,
  • quality issues,
  • and access policies.

Tamr1

This chaos prevents firms from fully achieving and maintaining a consolidated view of customers and their activity.

The Cost of Chaos

A chaotic customer data environment can be an expensive problem for a financial institution. Customer changes have to be implemented in multiple systems, with a high likelihood of error or inconsistency because of manual processes. Discrepancies in the data lead to inevitable remediation activities that are widespread and costly.

At one global bank, analyzing customer data required three months just to compile the data and validate its correctness. The chaos leads to either

  1. prohibitively high time and cost of data preparation or
  2. garbage-in, garbage-out analytics.

The result of customer data chaos is an incredibly high risk profile — operational, regulatory, and reputational.

Eliminating the Chaos 1.0

Many financial services companies attempt to eliminate this chaos and consolidate their customer data.

A common approach is to implement a master data management (MDM) system. Customer data from different source systems is centralized into one place where it can be harmonized. The output is a “golden record,” or master customer record.

A lambda architecture permits data to stream into the centralized store and be processed in real time so that it is immediately mastered and ready for use. Batch processes run on the centralized store to perform periodic (daily, monthly, quarterly, etc.) calculations on the data.
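As a rough, product-agnostic illustration of that pattern, the sketch below (in Python, with made-up record fields and trivial harmonization logic) shows a speed path that masters streamed events as they arrive and a batch path that periodically recomputes figures over the same central store. All names here are assumptions for the example, not any specific MDM product.

```python
# A minimal sketch of the lambda pattern: a speed layer masters streamed
# customer events immediately; a batch layer runs periodic calculations
# over the same centralized store.
from datetime import date

central_store = []  # stand-in for the centralized, harmonized customer store

def speed_layer(event: dict) -> dict:
    """Master one streamed customer event in (near) real time."""
    record = {
        "customer_id": event["id"],
        "name": event["name"].strip().upper(),   # trivial stand-in for harmonization
        "source": event.get("source", "unknown"),
    }
    central_store.append(record)
    return record

def batch_layer(as_of: date) -> dict:
    """Periodic (daily, monthly, quarterly) calculation over the full store."""
    return {"as_of": as_of.isoformat(), "customer_records": len(central_store)}

speed_layer({"id": "C-001", "name": " acme corp ", "source": "crm"})
print(batch_layer(date.today()))
```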

First-generation MDM systems centralize customer data and unify it by writing ETL scripts and matching rules.

Tamr2

The harmonizing often involves:

  1. Defining a common, master schema in which to store the consolidated data
  2. Writing ETL scripts to transform the data from source formats and schemas into the new common storage format
  3. Defining rule sets to deduplicate, match/cluster, and otherwise cleanse within the central MDM store
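As a minimal sketch of what the rule sets in step 3 can look like in practice, the hypothetical example below applies a deterministic matching rule over a couple of attributes. The attributes, normalization steps, and rule logic are illustrative assumptions, not a vendor rule set.

```python
# A deterministic, rules-based matching sketch: normalize legal-entity names,
# then match on LEI if present, otherwise on name + country.
import re

def normalize_name(name: str) -> str:
    """Crude legal-entity name normalization used by the matching rule."""
    name = name.lower()
    name = re.sub(r"\b(inc|ltd|llc|plc|gmbh)\b\.?", "", name)
    return re.sub(r"[^a-z0-9]", "", name)

def same_customer(a: dict, b: dict) -> bool:
    """Deterministic rule: match on LEI if present, otherwise on name + country."""
    if a.get("lei") and a.get("lei") == b.get("lei"):
        return True
    return (normalize_name(a["name"]) == normalize_name(b["name"])
            and a.get("country") == b.get("country"))

print(same_customer(
    {"name": "Acme Holdings Ltd", "country": "GB"},
    {"name": "ACME HOLDINGS LTD.", "country": "GB"},
))  # True
```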

There are a number of commercial MDM solutions available that support the deterministic approach outlined above. The initial experience with those MDM systems, integrating the first five or so large systems, is often positive. Scaling MDM to master more and more systems, however, becomes a challenge that grows exponentially, as we’ll explain below.

Rules-Based MDM and the Robustness-Versus-Expandability Trade-Off

The rule sets used to harmonize data are usually driven off a handful of dependent attributes: name, legal identifiers, location, and so on. Let’s say you use six attributes to stitch together two systems, A and B, and then the same six attributes between A and C, then A and D, B and C, B and D, and C and D. Within that example of four systems, you would have 36 potential attribute alignments to maintain (six pairs of systems times six attributes). Add a fifth system and it’s 60 alignments; a sixth system, 90. So the effort to master additional systems grows combinatorially with every system you add, as the sketch below illustrates. And in most multinational financial institutions, the number of synchronized attributes is not six; it’s commonly 50 to 100.
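The snippet below simply reproduces that arithmetic, assuming the same six attributes must be aligned between every pair of systems.

```python
# Pairwise growth of attribute alignments as systems are added to the MDM scope.
from math import comb

attributes = 6
for systems in range(4, 11):
    alignments = comb(systems, 2) * attributes   # pairs of systems x attributes
    print(f"{systems} systems -> {alignments} attribute alignments")
# 4 systems -> 36, 5 -> 60, 6 -> 90, ..., 10 systems -> 270
```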

And maintenance is equally burdensome. There’s no guarantee that your six attributes maintain their validity or veracity over time. If any of these attributes need to be modified, then rules need to be redefined across the systems all over again.

The trade-off for many financial institutions is robustness versus expandability. In other words, you can master data at large scale and accept wild complexity, or you can keep the scope small and have it highly accurate.

This is problematic for most financial institutions, which have very large-scale customer data challenges.

Customer Data Mastering at Scale

In larger financial services companies, especially multinationals, the number of systems in which customer data resides is much larger than the examples above. It is not uncommon to see financial companies with over 100 large systems.

Among those are systems that have been:

  • Duplicated in many countries to comply with data sovereignty regulations
  • Acquired through inorganic growth, with purchased companies bringing in their own infrastructure for trading, CRM, HR, and back office; integrating these systems can take significant time and cost

tamr3

When attempting to master a hundred sources containing petabytes of data, each linking and matching to the others in different ways across a multitude of attributes and systems, the matching rules required to harmonize the data become incredibly complex.

Every incremental source added to the MDM environment can require thousands of new rules. Within just a handful of systems, the complexity reaches a point where it becomes unmanageable. And as complexity grows, the cost of maintaining a rules-based approach scales with it, requiring more and more data stewards to keep all the stitching rules correct.

Mastering data at scale is one of the riskiest endeavors a business can undertake. Gartner reports that 85% of MDM projects fail, and MDM budgets of $10M to $20M per year are not uncommon in large multinationals. With stakes this high, choosing the right approach is critical to success.

A New Take on an Old Paradigm

What follows is a reference architecture. The approach daisy-chains three large tool sets, each with appropriate access policies enforced, that are responsible for three separate zones in the mastering process:

  1. Raw Data Zone
  2. Common Data Zone
  3. Mastered Data Zone

tamr4

Raw Data Zone The first zone sits on a traditional data lake model: a landing area for raw data. Data is replicated from source systems to the centralized data repository (often built on Hadoop). Wherever possible, data is replicated in real time (perhaps via Kafka) so that it stays current. For source systems that do not support real-time replication, nightly batch jobs or flat-file ingestion are used.

Common Data Zone Within the Common Data Zone, all of the data from the Raw Data Zone, which arrives as objects of different shapes and sizes, is conformed into outputs that look and feel the same to the system: the same column headers, data types, and formats.

The toolset in this zone uses machine learning models to categorize the data that exists within the Raw Data Zone. The models are trained on what specific attributes look like: a legal entity, a registered address, a country of incorporation, a legal hierarchy, or any other field. This categorization happens without going back to the source system owners and bogging them down with questions, saving weeks of effort.

This solution builds up a taxonomy and schema for the conformed data as raw data is processed. Unlike early-generation MDM solutions, this substantially reduces data unification time, often by months per source system, because there is:

  • No need to pre-define a schema to hold conformed data
  • No need to write ETL to transform the raw data
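As an illustration of this kind of attribute categorization (a generic sketch, not Tamr's implementation; the labels, training values, and model choice are assumptions), a simple character-n-gram text classifier can label raw field values by attribute type:

```python
# Sketch: classify raw field values into attribute categories using scikit-learn.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

train_values = [
    "Acme Holdings Ltd", "Globex Capital LLC",                     # legal entities
    "25 Bank Street, London E14 5JP", "1 Wall St, New York NY",    # registered addresses
    "United Kingdom", "Germany", "Singapore",                      # countries of incorporation
]
train_labels = [
    "legal_entity", "legal_entity",
    "registered_address", "registered_address",
    "country_of_incorporation", "country_of_incorporation", "country_of_incorporation",
]

# Character n-grams tolerate abbreviations and formatting noise in raw values.
model = make_pipeline(
    TfidfVectorizer(analyzer="char_wb", ngram_range=(2, 4)),
    LogisticRegression(max_iter=1000),
)
model.fit(train_values, train_labels)

# Categorize values sampled from an unknown column in the Raw Data Zone.
print(model.predict(["Initech GmbH", "France", "742 Evergreen Terrace, Springfield"]))
```

In production the training set would be far larger and continuously refined, but the principle is the same: the model, not a hand-written schema mapping, decides what each column contains.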

One multinational bank implementing this reference architecture reported conforming the raw data from a 10,000-table system within three days, without consuming source system experts' time to define a schema or write ETL code. For figuring out where relevant data is located across a vast estate of systems, this approach is productive and predictable.

Mastered Data Zone In the third zone, the conformed data is mastered, and the outputs of the mastering process are clusters of records that refer to the same real-world entity. Within each cluster, a single, unified golden master record of the entity is constructed. The golden customer record is then distributed to wherever it's needed:

  • Data warehouses
  • Regulatory (KYC, AML) compliance systems
  • Fraud and corruption monitoring
  • And back to operational systems, to keep data changes clean at the source

As with the Common Zone, machine learning models are used. These models eliminate the need to define hundreds of rules to match and deduplicate data. Tamr’s solution applies a probabilistic model that uses statistical analysis and naive Bayesian modeling to learn from existing relationships between various attributes, and then makes record-matching predictions based on these attribute relationships.

Tamr matching models require training, which usually takes just a few days per source system. Tamr presents a data steward with its predictions, and the steward can either confirm or deny them to help Tamr perfect its matching.

With the probabilistic model, Tamr looks at all of the attributes on which it has been trained and, based on the attribute matching, indicates a confidence level that a proposed match is accurate. Entries that fall below a configurable confidence threshold are excluded from further analysis and training.
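A minimal sketch of probabilistic pairwise matching with a confidence threshold, in the spirit of the approach described above (a generic illustration, not Tamr's code; the features, training pairs, and threshold value are assumptions):

```python
# Each candidate record pair is described by attribute-similarity features;
# a naive Bayes model trained on steward-confirmed pairs predicts match probability.
import numpy as np
from sklearn.naive_bayes import GaussianNB

# Features per candidate pair: [name_similarity, same_legal_identifier, address_similarity]
X_train = np.array([[0.95, 1, 0.90],   # steward-confirmed match
                    [0.90, 0, 0.85],   # steward-confirmed match
                    [0.40, 0, 0.30],   # steward-confirmed non-match
                    [0.10, 0, 0.20]])  # steward-confirmed non-match
y_train = np.array([1, 1, 0, 0])

model = GaussianNB().fit(X_train, y_train)

CONFIDENCE_THRESHOLD = 0.8  # configurable, as described above
candidates = np.array([[0.92, 1, 0.70],
                       [0.55, 0, 0.40]])
match_probs = model.predict_proba(candidates)[:, 1]

for pair, p in zip(candidates, match_probs):
    if p >= CONFIDENCE_THRESHOLD:
        print(f"propose match (confidence {p:.2f}); steward confirms or denies")
    else:
        print(f"below threshold (confidence {p:.2f}); excluded from this round")
```

Each steward confirmation or denial becomes another labeled training pair, which is why accuracy improves as more data flows through.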

As you train Tamr and correct it, it becomes more accurate over time: the more data you feed the solution, the better it gets. This is a stark contrast to the rules-based MDM approach, which tends to break as more data is thrown at it because the rules can't keep up with the complexity.

Distribution A messaging bus (e.g., Apache Kafka) is often used to distribute mastered customer data throughout the organization. If a source system wants to pick up the master copy from the platform, it subscribes to that topic on the messaging bus to receive the feed of changes.

Another approach is to pipeline deltas from the MDM platform into target systems in batch.
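For the streaming option, here is a sketch of how publication and subscription might look, assuming the kafka-python client, a broker at localhost:9092, and a hypothetical topic named golden-customer-records (none of these names come from the architecture above):

```python
# Distribute golden records over a messaging bus using kafka-python (sketch only).
import json
from kafka import KafkaConsumer, KafkaProducer

# MDM side: publish each newly mastered golden record as a change event.
producer = KafkaProducer(bootstrap_servers="localhost:9092",
                         value_serializer=lambda v: json.dumps(v).encode("utf-8"))
producer.send("golden-customer-records",
              {"customer_id": "C-001", "name": "ACME HOLDINGS LTD", "country": "GB"})
producer.flush()


def apply_golden_record(record: dict) -> None:
    """Hypothetical hook: apply the mastered record to the local operational system."""
    print("updating local record:", record)


# Target system side: subscribe to the topic to receive the feed of changes.
consumer = KafkaConsumer("golden-customer-records",
                         bootstrap_servers="localhost:9092",
                         value_deserializer=lambda v: json.loads(v.decode("utf-8")),
                         auto_offset_reset="earliest",
                         group_id="crm-system")
for message in consumer:
    apply_golden_record(message.value)
```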

Real-world Results

This data mastering architecture is in production at a number of large financial institutions. Compared with traditional MDM approaches, the model-driven approach provides the following advantages:

70% fewer IT resources required:

  • Humans in the entity resolution loop are much more productive, focused on a relatively small percentage (~5%) of exceptions that the machine learning algorithms cannot resolve
  • Eliminates ETL and matching rules development
  • Reduces manual data synchronization and remediation of customer data across systems

Faster customer data unification:

  • A global retail bank mastered 35 large IT systems within 6 months—about 4 days per source system
  • New data is mastered within 24 hours of landing in the Raw Data Zone
  • A platform for mastering any category of data: customer, product, supplier, and others

Faster, more complete achievement of data-driven business initiatives:

  • KYC, AML, fraud detection, risk analysis, and others.

 

Click here to access Tamr’s detailed analysis

An Animal Kingdom of Disruptive Risks: How boards can oversee black swans, gray rhinos, and white elephants

Where was the board? As a corporate director, imagine you find yourself in one of these difficult situations:

  • Unexpected financial losses mount as your bank faces a sudden collapse during a 1-in-100-year economic crisis.
  • Customers leave and profits drop year after year as a new technology start-up takes over your No. 1 market position.
  • Negative headlines and regulatory actions besiege your company following undesirable tweets and other belligerent behavior from the CEO.

These scenarios are not hard to imagine when you consider what unfolded before the boards of Lehman Brothers, Blockbuster, Tesla, and others. In the context of disruptive risks, these events can be referred to as black swans, gray rhinos, and white elephants, respectively. While each has unique characteristics, the commonality is that all of these risks can have a major impact on a company’s profitability, competitive position, and reputation.

In a VUCA (volatile, uncertain, complex, and ambiguous) world, boards need to expand their risk governance and oversight to include disruptive risks. This article addresses three fundamental questions:

  • What are black swans, gray rhinos, and white elephants?
  • Why are they so complex and difficult to deal with?
  • How should directors incorporate these disruptive risks as part of their oversight?

Why are companies so ill prepared for disruptive risks? There are three main challenges:

  1. standard enterprise risk management (ERM) programs may not capture them;
  2. they each present unique characteristics and complexities;
  3. and cognitive biases prevent directors and executives from addressing them.

Standard tools used in ERM, including risk assessments and heat maps, are not timely or dynamic enough to capture unconventional and atypical risks. Most risk quantification models, such as earnings volatility and value-at-risk models, measure potential loss within a 95 percent or 99 percent confidence level. Black swan events, on the other hand, may have less than a 0.1 percent chance of happening. Gray rhinos and white elephants are atypical risks that may have no historical precedent or operational playbooks. As such, disruptive risks may not be adequately addressed in standard ERM programs even if they have the potential to destroy the company.

The characteristics and complexities of each type of disruptive risk are unique. The key challenge with black swans is prediction: they are outliers that were previously unthinkable. That is not the case with gray rhinos, since they are generally observable trends. With gray rhinos the main culprit is inertia: companies see the megatrends charging at them, but they can't seem to mitigate the risk or seize the opportunity. The key issue with white elephants is subjectivity. These no-win situations are often highly charged with emotions and conflicts. Doing nothing is usually the easiest choice but leads to the worst possible outcome.

While it is imperative to respond to disruptive risks, cognitive biases can lead to systematic errors in decision making. Behavioral economists have identified dozens of biases, but several are especially pertinent in dealing with disruptive risks:

  • Availability and hindsight bias is the underestimation of risks that we have not experienced and the overestimation of risks that we have. This bias is a key barrier to acknowledging atypical risks until it is too late.
  • Optimism bias is a tendency to overestimate the likelihood of positive outcomes and to underestimate the likelihood of negative outcomes. This is a general issue for risk management, but it is especially problematic in navigating disruptive risks.
  • Confirmation bias is the preference for information that is consistent with one’s own beliefs. This behavior prevents us from processing new and contradictory information, or from responding to early signals.
  • Groupthink or herding occurs when individuals strive for group consensus at the cost of objective assessment of alternative viewpoints. It is related to the sense of safety in being part of a larger group, regardless of whether the group's actions are rational.
  • Myopia or short-termism is the tendency to have a narrow view of risks and a focus on short-term results (e.g., quarterly earnings), resulting in a reluctance to invest for the longer term.
  • Status quo bias is a preference to preserve the current state. This powerful bias creates inertia and stands in the way of appropriate actions.

To overcome cognitive biases, directors must recognize that they exist and consider how they impact decision making. Moreover,

  • board diversity,
  • objective data,
  • and access to independent experts

can counter cognitive biases in the boardroom.

Recommendations for Consideration

How should directors help their organizations navigate disruptive risks? They can start by asking the right questions in the context of the organization’s business model and strategy. The chart below lists 10 questions that directors can ask themselves and management.

NACD1

In addition, directors should consider the following five recommendations to enhance their risk governance and oversight:

  1. Incorporate disruptive risks into the board agenda. The full board should discuss the potential impact of disruptive risks as part of its review of the organization’s strategy to create sustainable long-term value. Disruptive risks may also appear on the agenda of key committees, including the risk committee’s assessment of enterprise risks, the audit committee’s review of risk disclosures, the compensation committee’s determination of executive incentive plans, and the governance committee’s processes for addressing undesirable executive behavior. The key is to explicitly incorporate disruptive risks into the board’s oversight and scope of work.
  2. Ensure that fundamental ERM practices are effective. Fundamental ERM practices—risk policy and analytics, management strategies, and metrics and reporting—provide the baseline from which disruptive risks can be considered. As an example, the definition of risk appetite can inform discussions of loss tolerance relative to disruptive risks. As an early step, the board should ensure that the overall ERM framework is robust and effective. Otherwise, the organization may fall victim to “managing risk by silo” and miss critical interdependencies between disruptive risks and other enterprise risks.
  3. Consider scenario planning and analysis. Directors should recognize that basic ERM tools may not fully capture disruptive risks. They should consider advocating for, and participating in, scenario planning and analysis. This is akin to tabletop exercises for cyber-risk events, except much broader in scope. Scenario analysis can be a valuable tool to help companies put a spotlight on hidden risks, generate strategic insights on performance drivers, and identify appropriate actions for disruptive trends. The objective is not to predict the future, but to identify the key assumptions and sensitivities in the company’s business model and strategy. In addition to scenario planning, dynamic simulation models and stress-testing exercises should be considered.
  4. Ensure board-level risk metrics and reports are effective. The quality of risk reports is key to the effectiveness of board risk oversight. Standard board risk reports often consist of limited information: historical loss and event data, qualitative risk assessments, and static heat maps. An effective board risk report should include quantitative analyses of risk impacts to earnings and value, key risk metrics measured against risk appetite, and forward-looking information on emerging risks. By leveraging scenario planning, the following reporting components can enhance disruptive risk monitoring:
    • Market intelligence data that provides directors with useful “outside-in” information, including key business and industry developments, consumer and technology trends, competitive actions, and regulatory updates.
    • Enterprise performance and risk analysis including key performance and risk indicators that quantify the organization’s sensitivities to disruptive risks.
    • Geo-mapping that highlights global “hot spots” for economic, political, regulatory, and social instability. This can also show company-specific risks such as third-party vendor, supply chain, and cybersecurity issues.
    • Early-warning indicators that provide general or scenario-specific signals with respect to risk levels, effectiveness of controls, and external drivers.
    • Action triggers and plans to facilitate timely discussions and decisions in response to disruptive risks.
  5. Strengthen board culture and governance. To effectively oversee disruptive risks, the board must be fit for purpose. This requires creating a board culture that considers nontraditional views, questions key assumptions, and supports continuous improvement. Good governance practices should be in place in the event a white elephant appears. For example, what is the board protocol and playbook if the CEO acts inappropriately? In the United States, the 25th Amendment and impeachment clauses are in place ostensibly to remove a reprehensible president. Does the organization have procedures to remove a reprehensible CEO?

The following chart summarizes the key characteristics, examples, indicators, and strategies for identifying and addressing black swans, gray rhinos, and white elephants. The end goal should be to enhance oversight of disruptive risks and counter the specific challenges that are presented. To mitigate the unpredictability of black swans, the company should develop contingency plans with a focus on preparedness. To overcome inertia and deal with gray rhinos, the company needs to establish organizational processes and incentives to increase agility. To balance subjectivity and confront white elephants, directors should invest in good governance and objective input that will support decisiveness.

NACD2

The Opportunity for Boards

In a VUCA world, corporate directors must expand their traditional risk oversight beyond well-defined strategic, operational, and financial risks. They must consider atypical risks that are hard to predict, easy to ignore, and difficult to address. While black swans, gray rhinos, and white elephants may sound like exotic events, directors can enhance their recognition of them by reflecting on their own experiences serving on boards.

Given their experiences, directors should provide a leading voice to improve oversight of disruptive risks. They have a comparative advantage in seeing the big picture based on the nature of their work— part time, detached from day-to-day operations, and with experience gained from serving different companies and industries. Directors can add significant value by providing guidance to management and helping them see the forest for the trees. Finally, there is an opportunity side to risk. There are positive and negative black swans. A company can invest in the positive ones and be prepared for the negative ones. For every company that is trampled by a gray rhino, another company is riding it to a higher level of performance. By addressing the white elephant in the boardroom, a company can remediate an unspoken but serious problem. In the current environment, board oversight of disruptive risks represents both a risk management imperative and a strategic business opportunity.

Click here to access NACD’s summary