A Practical Guide to Analytics and AI in the Cloud With Legacy Data

Introduction

Businesses that rely on legacy data sources such as the mainframe have invested heavily in building a reliable data platform. At the same time, these enterprises want to move data into the cloud to take advantage of the latest in analytics, data science and machine learning.

The Importance of Legacy Data

Mainframe is still the processing backbone for many organizations, constantly generating important business data.

It’s crucial to consider the following:

MAINFRAME IS THE ENTERPRISE TRANSACTION ENVIRONMENT

In 2019, there was a 55% increase in transaction volume on mainframe environments. Studies estimate that 2.5 billion transactions are run per day, per legacy system across the world.

LEGACY IS THE FUEL BEHIND CUSTOMER EXPERIENCES

Within industries such as financial services and insurance, most customer information lives on legacy systems. Over 70% of enterprises say their customer-facing applications are completely or very reliant on mainframe processing.

BUSINESS-CRITICAL APPLICATIONS RUN ON LEGACY SYSTEMS

Mainframe often holds business-critical information and applications — from credit card transactions to claims processing. Over half of enterprises with a mainframe run more than half of their business-critical applications on the platform.

However, these legacy systems also present a limitation for organizations on their analytics and data science journey. While moving everything to the cloud may not be the answer, identifying ways to start a legacy modernization process is crucial to the next generation of data and AI initiatives.

The Cost of Legacy Data

Across the enterprise, legacy systems such as mainframe serve as a critical piece of infrastructure that is ripe with opportunity for integration with modern analytics platforms. If a modern analytics platform is only as good as the data fed into it, that means enterprises must include all data sources for success. However, many complexities can occur when organizations look to build the data integration pipelines between their modern analytics platform and legacy sources. As a result, the plans made to connect these two areas are often easier said than done.

DATA SILOS HINDER INNOVATION

Over 60% of IT professionals with legacy and modern technology in house are finding that data silos are negatively affecting their business. As data volumes increase, IT can no longer rely on current data integration approaches to solve their silo challenges.

CLOUDY BUSINESS INSIGHTS

Business demands that more decisions be driven by data. Still, few IT professionals who work with legacy systems feel they are successful in delivering data insights beyond their immediate department. Data-driven insights will be the key to competitive success, and the inability to provide them puts a business at risk.

SKILLS GAP WIDENS

While it may be difficult to find skills for the latest technology, it’s becoming even harder to find skills for legacy platforms. Enterprises have only replaced 37% of the mainframe workforce lost over the past five years. As a result, the knowledge needed to integrate mainframe data into analytics platforms is disappearing. While the drive for building a modern analytics platform is more powerful than ever, taking this initiative and improving data integration practices that encompass all enterprise data has never been more challenging.

The success of building a modern analytics platform hinges on understanding the common challenges of integrating legacy data sources and choosing the right technologies that can scale with the changing needs of your organization.

Challenges Specific to Extracting Mainframe Data

With so much valuable data on mainframe, the most logical step is to connect these legacy data sources to a modern data platform. In practice, however, that connection is easier said than done. Common challenges of extracting mainframe data for integration with modern analytics platforms include the following:

DATA STRUCTURE

It’s common for legacy data not to be readily compatible with downstream analytics platforms, open-source frameworks and data formats. The varied structures of legacy data sources differ from relational data, with traits such as

  • hierarchical tables,
  • embedded headers and trailers,
  • and complex data structures (e.g., nested, repeated or redefined elements).

If the COBOL redefines and logic are set up incorrectly at the start of a data integration workflow, legacy data structures risk slowing processing speeds to the point of business disruption and can lead to incorrect data for downstream consumption.

METADATA

COBOL copybooks can be a massive hurdle to overcome when integrating mainframe data. Copybooks are the metadata blocks that define the physical layout of data but are stored separately from that data. As a result, they can be quite complicated, containing not just formatting information but also logic in the form of, for example, nested OCCURS DEPENDING ON clauses. For many mainframe files, hundreds of copybooks may map to a single file. Feeding mainframe data directly into an analytics platform without interpreting this metadata can result in significant data confusion.
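
To make the dependency concrete, here is a minimal Python sketch of what a copybook describes: a fixed-width record layout that must be translated into byte offsets before the raw data can be interpreted. The copybook fragment and field names are hypothetical, not taken from any real system.

    # A simplified, hypothetical copybook fragment (shown as comments):
    #
    #   01  CUSTOMER-REC.
    #       05  CUST-ID     PIC 9(6).
    #       05  CUST-NAME   PIC X(20).
    #       05  BALANCE     PIC S9(7)V99 COMP-3.
    #
    # The copybook, not the data file, is the only record of this layout,
    # so it has to be parsed into offsets before a record can be read.

    LAYOUT = [
        # (field name, byte offset, length in bytes)
        ("CUST-ID",    0,  6),   # zoned decimal, one byte per digit
        ("CUST-NAME",  6, 20),   # EBCDIC text
        ("BALANCE",   26,  5),   # 9 digits + sign packed into 5 bytes (COMP-3)
    ]

    def slice_record(record: bytes) -> dict:
        """Cut one fixed-width mainframe record into raw byte fields."""
        return {name: record[offset:offset + length]
                for name, offset, length in LAYOUT}

A real copybook can run to hundreds of fields, with REDEFINES and OCCURS DEPENDING ON clauses that make the offsets conditional, which is exactly why hand-rolling this translation is risky.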

DATA MAPPING

Unlike an RDBMS, which requires data to be entered into tables and columns, nothing enforces a set data structure on the mainframe. COBOL copybooks are incredibly flexible, so that they

  • can group multiple pieces of data into one field,
  • or subdivide a field into multiple fields,
  • or ignore whole sections of a record.

As a result, data mapping issues will arise. The copybooks reflect the needs of the program, not the needs of a data-driven view.

DIFFERENT STORAGE FORMATS

Numeric values stored one way on a mainframe often must be stored differently when the data moves to the cloud. Mainframes also use a different character encoding scheme (EBCDIC rather than ASCII), an 8-bit structure versus a 7-bit structure. On top of that, multiple numeric encoding schemes are used to “pack” numbers into less storage space (e.g., packed decimal). In addition to these complex storage formats, there are techniques that use each individual bit to store data.
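
To make the format differences concrete, here is a minimal Python sketch, with made-up sample bytes, that decodes an EBCDIC text field (code page 037) and a packed-decimal (COMP-3) amount with an assumed two-decimal scale.

    import codecs

    def unpack_comp3(raw: bytes, scale: int = 2) -> float:
        """Decode an IBM packed-decimal (COMP-3) field: two digits per
        byte, with the final nibble holding the sign (0xD = negative)."""
        digits = []
        for byte in raw[:-1]:
            digits.append((byte >> 4) & 0x0F)
            digits.append(byte & 0x0F)
        digits.append((raw[-1] >> 4) & 0x0F)
        sign_nibble = raw[-1] & 0x0F
        value = int("".join(str(d) for d in digits))
        if sign_nibble == 0x0D:
            value = -value
        return value / (10 ** scale)

    # EBCDIC (code page 037) text decoded to Unicode
    name = codecs.decode(b"\xD1\xD6\xC8\xD5", "cp037")   # -> "JOHN"

    # 0x12 0x34 0x5C packs the digits 1-2-3-4-5 plus a positive sign
    amount = unpack_comp3(b"\x12\x34\x5C", scale=2)      # -> 123.45

    print(name, amount)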

Whether it’s a lack of internal knowledge on how to handle legacy data or a rigid data framework, ignoring legacy data when building a modern data analytics platform means missing valuable information that can enhance any analytics project.

Pain Points of Building a Modern Analytics Platform

Tackling the challenges of mainframe data integration is no simple task. Besides determining the best approach for integrating these legacy data sources, IT departments are also dealing with the everyday challenges of running a department. Regardless of the size of an organization, there are daily struggles everyone faces, from siloed data to lack of IT skills.

ENVIRONMENT COMPLEXITY

Many organizations have adopted hybrid and multi-cloud strategies to

  • manage data proliferation,
  • gain flexibility,
  • reduce costs
  • and increase capacities.

Cloud storage and the lakehouse architecture offer new ways to manage and store data. However, organizations still need to maintain and integrate their mainframes and other on-premises systems — resulting in a challenging integration strategy that must encompass a variety of environments.

SILOED DATA

The increase in data silos adds further complexity to growing data volumes. Data silo creation happens as a direct result of increasing data sources. Research has shown that data silos have directly inhibited the success of analytics and machine learning projects.

PERFORMANCE

Processing the requirements of growing data volumes can cause a slowdown in a data stream. Loading hundreds, or even thousands, of database tables into a big data platform — combined with an inefficient use of system resources — can create a data bottleneck that hampers the performance of data integration pipelines.

DATA QUALITY

Industry studies have shown that up to 90% of a data scientist’s time is spent getting data into the right condition for use in analytics. In other words, the data feeding analytics usually cannot be trusted until it has been through that preparation. Data quality processes that include

  • mapping,
  • matching,
  • linking,
  • merging,
  • deduplication
  • and actionable data

are critical to providing frameworks with trusted data.
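
As a minimal sketch of the matching, merging and deduplication steps, the Python below builds a crude match key from a name and postal code and merges records that share it. The field names and normalization rules are illustrative; production matching logic is considerably richer.

    import re

    def match_key(record: dict) -> tuple:
        """Build a simple match key: lowercase the name, strip punctuation,
        keep the first five postal-code digits (illustrative rules)."""
        name = re.sub(r"[^a-z ]", "", record.get("name", "").lower()).strip()
        postal = re.sub(r"\D", "", record.get("postal_code", ""))[:5]
        return (name, postal)

    def deduplicate(records: list) -> list:
        """Merge records sharing a match key, keeping the first record seen
        and filling in attributes that only the duplicates carry."""
        merged = {}
        for rec in records:
            key = match_key(rec)
            if key in merged:
                for field, value in rec.items():
                    merged[key].setdefault(field, value)
            else:
                merged[key] = dict(rec)
        return list(merged.values())

    customers = [
        {"name": "ACME Corp.", "postal_code": "10001-1234"},
        {"name": "Acme Corp",  "postal_code": "10001", "segment": "enterprise"},
    ]
    print(deduplicate(customers))   # one merged, enriched record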

DATA TYPES AND FORMATS

Valuable data for analytics comes from a range of sources across the organization, including CRMs, ERPs, mainframes and online transaction processing systems. However, as organizations rely on more systems, the number of data types and formats continues to grow.

IT now has the challenge of making big data, NoSQL and unstructured data all readable for downstream analytics solutions.

SKILLS GAP AND RESOURCES

The need for workers who understand how to build data integration frameworks for mainframe, cloud, and cluster data sources is increasing, but the market cannot keep up. Studies have shown that unfilled data engineer jobs and data scientist jobs have increased 12x in the past year alone. As a result, IT needs to figure out how to integrate data for analytics with the skills they have internally.

What Your Cloud Data Platform Needs

A new data management paradigm has emerged that combines the best elements of data lakes and data warehouses, enabling

  • analytics,
  • data science
  • and machine learning

on all your business data: the lakehouse.

Lakehouses are enabled by a new system design: implementing data structures and data management features similar to those in a data warehouse, directly on the kind of low-cost storage used for data lakes. They are what you would get if you redesigned the data warehouse for the modern world, now that cheap and highly reliable storage (in the form of object stores) is available.
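
As a rough sketch of that design, assuming the pyarrow library is available, the example below writes a small table to Parquet with an explicit schema; the table and column names are illustrative. Because Parquet is an open format sitting on cheap storage, any engine that reads it works against the same copy of the data.

    from datetime import datetime
    import pyarrow as pa
    import pyarrow.parquet as pq

    # Explicit schema: warehouse-style data management applied to files
    # that live on low-cost object storage.
    schema = pa.schema([
        ("customer_id", pa.int64()),
        ("event_time",  pa.timestamp("ms")),
        ("amount",      pa.float64()),
    ])

    table = pa.table(
        {
            "customer_id": [101, 102],
            "event_time":  [datetime(2024, 6, 1, 12, 0), datetime(2024, 6, 1, 13, 0)],
            "amount":      [19.99, 5.00],
        },
        schema=schema,
    )

    # Parquet is an open format: BI connectors, Spark, pandas and other
    # engines can all read the same file without vendor lock-in.
    pq.write_table(table, "events.parquet")
    print(pq.read_table("events.parquet").schema)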

This new paradigm is the vision for data management that provides the best architecture for modern analytics and AI. It will help organizations capture data from hundreds of sources, including legacy systems, and make that data available and ready for analytics, data science and machine learning.

Lakehouse

A lakehouse has the following key features:

  • Open storage formats, such as Parquet, avoid lock-in and provide accessibility to the widest variety of analytics tools and applications
  • Decoupled storage and compute provides the ability to scale to many concurrent users by adding compute clusters that all access the same storage cluster
  • Transaction support handles failure scenarios and provides consistency when multiple jobs concurrently read and write data
  • Schema management enforces the expected schema when needed and handles evolving schemas as they change over time
  • Business intelligence tools directly access the lakehouse to query data, enabling access to the latest data without the cost and complexity of replicating data across a data lake and a data warehouse
  • Data science and machine learning tools used for advanced analytics rely on the same data repository
  • First-class support for all data types across structured, semi-structured and unstructured, plus batch and streaming data

Click here to access Databricks’ and Precisely’s White Paper

How To Build a CX Program And Transform Your Business

Customer Experience (CX) is a catchy business term that has been around for decades, but until recently, measuring and managing it was not possible. Now, with the evolution of technology, a company can build and operationalize a true CX program.

For years, companies championed NPS surveys, CSAT scores, web feedback, and other sources of data as the drivers of “Customer Experience”. However, these singular sources of data don’t give a true, comprehensive view of how customers feel, think, and act. Unfortunately, most companies aren’t capitalizing on the benefits of a CX program: less than 10% of companies have a CX executive, and of those, only 14% believe that Customer Experience, as a program, is the aggregation and analysis of all customer interactions, with the objective of uncovering and disseminating insights across the company in order to improve the experience. At a time when the customer experience separates the winners from the losers, CX must be more of a priority for ALL businesses.

This not only includes the analysis of typical channels in which customers directly interact with your company (calls, chats, emails, feedback, surveys, etc.) but all the channels in which customers may not be interacting directly with you – social, reviews, blogs, comment boards, media, etc.

In order to understand the purpose of a CX team and how it operates, you first need to understand how most businesses organize, manage, and carry out their customer experiences today.

Essentially, a company’s customer experience is owned and managed by a handful of teams. This includes, but is not limited to:

  • digital,
  • brand,
  • strategy,
  • UX,
  • retail,
  • design,
  • pricing,
  • membership,
  • logistics,
  • marketing,
  • and customer service.

All of these teams have a hand in customer experience.

To ensure that they are working towards a common goal, they must

  1. communicate in a timely manner,
  2. meet and discuss upcoming initiatives and projects,
  3. and discuss results along with future objectives.

In a perfect world, every team has the time and passion to accomplish these tasks to ensure the customer experience is in sync with their work. In reality, teams end up scrambling for information and understanding of how each business function is impacting the customer experience – sometimes after the CX program has already launched.

This process is extremely inefficient and can lead to serious problems across the customer experience. These problems can lead to irreparable financial losses. If business functions are not on the same page when launching an experience, it creates a broken one for customers. Siloed teams create siloed experiences.

There are plenty of companies that operate in a semi-siloed manner and feel it is successful. What these companies don’t understand is that customer experience issues often occur between the ownership of these silos, in what some refer to as the “customer experience abyss,” where no business function claims ownership. Customers react to these broken experiences by communicating their frustration through different communication channels (chats, surveys, reviews, calls, tweets, posts etc.).

For example, if a company launches a new subscription service and customers are confused about the pricing model, is it the job of customer service to explain it to customers?  What about those customers that don’t contact the business at all? Does marketing need to modify their campaigns? Maybe digital needs to edit the nomenclature online… It could be all of these things. The key is determining which will solve the poor customer experience.

The objective of a CX program is to focus deeply on what customers are saying and shift business teams to become advocates for what they say. Once advocacy is achieved, the customer experience can be improved at scale with speed and precision. A premium customer experience is the key to company growth and customer retention. How important is the customer experience?

You may be saying to yourself, “We already have teams examining our customer data, no need to establish a new team to look at it.” While this may be true, the teams are likely taking a siloed approach to analyzing customer data by only investigating the portion of the data they own.

For example, the social team looks at social data, the digital team analyzes web feedback and analytics, the marketing team reviews surveys and performs studies, etc. Seldom do these teams come together and combine their data to get a holistic view of the customer. Furthermore, when it comes to prioritizing CX improvements, they do so based on an incomplete view of the customer.

Consolidating all customer data gives a unified view of your customers while lessening the workload and increasing the rate at which insights are generated. The experience customers have with marketing, digital, and customer service, all lead to different interactions. Breaking these interactions into different, separate components is the reason companies struggle with understanding the true customer experience and miss the big picture on how to improve it.

The CX team, once established, will be responsible for creating a unified view of the customer which will provide the company with an unbiased understanding of how customers feel about their experiences as well as their expectations of the industry. These insights will provide awareness, knowledge, and curiosity that will empower business functions to improve the end-to-end customer experience.

CX programs are disruptive. A successful CX program will uncover insights that align with current business objectives and some insights that don’t at all. So, what do you do when you run into that stone wall? How do you move forward when a business function refuses to adopt the voice of the customer? Call in back-up from an executive who understands the value of the voice of the customer and why it needs to be top of mind for every function.

When creating a disruptive program like CX, an executive owner is needed to overcome business hurdles along the way. Ideally, this executive owner will support the program and promote it to the broader business functions. In order to scale and become more widely adopted, it is also helpful to have executive support when the program begins.

The best candidates for initial ownership are typically marketing, analytics or operations executives. Along with understanding the value a CX program can offer, they should also understand the business’ current data landscape and help provide access to these data sets. Once the CX team has access to all the available customer data, it will be able to aggregate all necessary interactions.

Executive sponsors will help dramatically in regard to CX program adoption and eventual scaling. Executive sponsors

  • can provide the funding to secure the initial success,
  • promote the program to ensure other business functions work more closely with it,
  • and remove roadblocks that may otherwise take weeks to get over.

Although an executive sponsor is not necessary, it can make your life exponentially easier while you build, launch, and execute your CX program. Your customers don’t always tell you what you want to hear, and that can be difficult for some business functions to handle. When this is the case, some business functions will try to discredit insights altogether if they don’t align with their goals.

Data grows exponentially every year, faster than any company can manage. In 2016, 90% of the world’s data had been created in the previous two years. 80% of that data was unstructured language. The hype of “Big Data” has passed and the focus is now on “Big Insights” – how to manage all the data and make it useful. A company should not be allocating resources to collecting more data through expensive surveys or market research – instead, they should be focused on doing a better job of listening and reacting to what customers are already saying, by unifying the voice of the customer with data that is already readily available.

It’s critical to identify all the available customer interactions and determine value and richness. Be sure to think about all forms of direct and indirect interactions customers have. This includes:

(Figure CX3: direct and indirect customer interaction channels)

These channels are just a handful of the most popular avenues customers use to engage with brands. Your company may have more, fewer, or none of these. Regardless, the focus should be on aggregating as many as possible to create a holistic view of the customer. This does not mean only aggregating your phone calls and chats; it includes every channel where your customers talk with, at, or about your company. You can’t be selective when it comes to analyzing your customers by channel. All customers are important, and they may have different ways of communicating with you.

Imagine if someone only listened to their significant other in the two rooms where they spend the most time, say the family room and kitchen. They would probably have a good understanding of the overall conversations (similar to a company only reviewing calls, chats, and social). However, ignoring them in the dining room, bedroom, kids’ rooms, and backyard, would inevitably lead to serious communication problems.

It’s true that phone, chat, and social data is extremely rich, accessible, and popular, but that doesn’t mean you should ignore other customers. Every channel is important. Each is used by a different customer, in a different manner, and serves a different purpose, some providing more context than others.

You may find your most important customers aren’t always the loudest and may be interacting with you through an obscure channel you never thought about. You need every customer channel to fully understand their experience.

Click here to access Topbox’s detailed study

Better practices for compliance management

The main compliance challenges

We know that businesses and government entities alike struggle to manage compliance requirements. Many have put up with challenges for so long—often with limited resources—that they no longer see how problematic the situation has become.

FIVE COMPLIANCE CHALLENGES YOU MIGHT BE DEALING WITH

01 COMPLIANCE SILOS
It’s not uncommon that, over time, separate activities, roles, and teams develop to address different compliance requirements. There’s often a lack of integration and communication among these teams or individuals. The result is duplicated efforts—and the creation of multiple clumsy and inefficient systems. This is then perpetuated as compliance processes change in response to regulations, mergers and acquisitions, or other internal business re-structuring.

02 NO SINGLE VIEW OF COMPLIANCE ASSURANCE
Siloed compliance systems also make it hard for senior management to get an overview of current compliance activities and perform timely risk assessments. If you can’t get a clear view of compliance risks, then chances are good that a damaging risk will slip under the radar, go unaddressed, or simply be ignored.

03 COBBLED TOGETHER, HOME-GROWN SYSTEMS
Using generalized software, like Excel spreadsheets and Word documents, in addition to shared folders and file systems, might have made sense at one point. But, as requirements become more complex, these systems become more frustrating, inefficient, and risky. Compiling hundreds or thousands of spreadsheets to support compliance management and regulatory reporting is a logistical nightmare (not to mention time-consuming). Spreadsheets are also prone to error and limited because they don’t provide audit trails or activity logs.

04 OLD SOFTWARE, NOT DESIGNED TO KEEP UP WITH FREQUENT CHANGES
You could be struggling with older compliance software products that aren’t designed to deal with constant change. These can be increasingly expensive to upgrade, not the most user-friendly, and difficult to maintain.

05 NOT USING AUTOMATED MONITORING
Many compliance teams are losing out by not using analytics and data automation. Instead, they rely heavily on sample testing to determine if compliance controls and processes are working, so huge amounts of activity data are never actually checked.

Transform your compliance management process

Good news! There are some practical steps you can take to transform compliance processes and systems so that they become way more efficient and far less expensive and painful.

It’s all about optimizing the interactions of people, processes, and technology around regulatory compliance requirements across the entire organization.

It might not sound simple, but it’s what needs to be done. And, in our experience, it can be achieved without becoming massively time-consuming and expensive. Technology for regulatory compliance management has evolved to unite processes and roles across all aspects of compliance throughout your organization.

Look, for example, at how technology like Salesforce (a cloud-based system with big data analytics) has transformed sales, marketing, and customer service. Now, there’s similar technology which brings together different business units around regulatory compliance to improve processes and collaboration for the better.

Where to start?

Let’s look at what’s involved in establishing a technology-driven compliance management process. One that’s driven by data and fully integrated across your organization.

THE BEST PLACE TO START IS THE END

Step 1: Think about the desired end-state.

First, consider the objectives and the most important outcomes of your new process. How will it impact the different stakeholders? Take the time to clearly define the metrics you’ll use to measure your progress and success.

A few desired outcomes:

  • Accurately measure and manage the costs of regulatory and policy compliance.
  • Track how risks are trending over time, by regulation, and by region.
  • Understand, at any point in time, the effectiveness of compliance-related controls.
  • Standardize approaches and systems for managing compliance requirements and risks across the organization.
  • Efficiently integrate reporting on compliance activities with those of other risk management functions.
  • Create a quantified view of the risks faced due to regulatory compliance failures for executive management.
  • Increase confidence and response times around changing and new regulations.
  • Reduce duplication of efforts and maximize overall efficiency.

NOW, WHAT DO YOU NEED TO SUPPORT YOUR OBJECTIVES?

Step 2: Identify the activities and capabilities that will get you the desired outcomes.

Consider the different parts of the compliance management process below. Then identify the steps you’ll need to take or the changes you’ll need to make to your current activity that will help you achieve your objectives. We’ve put together a cheat sheet to help this along.

IDENTIFY & IMPLEMENT COMPLIANCE CONTROL PROCEDURES

  • 01 Maintain a central library of regulatory requirements and internal corporate policies, allocated to owners and managers.
  • 02 Define control processes and procedures that will ensure compliance with regulations and policies.
  • 03 Link control processes to the corresponding regulations and corporate policies.
  • 04 Assess the risk of control weaknesses and failure to comply with regulations and policies.

RUN TRANSACTIONAL MONITORING ANALYTICS

  • 05 Monitor the effectiveness of controls and compliance activities with data analytics.
  • 06 Get up-to-date confirmation of the effectiveness of controls and compliance from owners with automated questionnaires or certification of adherence statements.

MANAGE RESULTS & RESPOND

  • 07 Manage the entire process of resolving exceptions generated by analytic monitoring, questionnaires and certifications.

REPORT RESULTS & UPDATE ASSESSMENTS

  • 08 Use the results of monitoring and exception management to produce risk assessments and trends.
  • 09 Identify new and changing regulations as they occur and update repositories and control and compliance procedures.
  • 10 Report on the current status of compliance management activities from high- to low-detail levels.

IMPROVE THE PROCESS

  • 11 Identify duplicate processes and fix procedures to combine and improve controls and compliance tests.
  • 12 Integrate regulatory compliance risk management, monitoring, and reporting with overall risk management activities.

Eight compliance processes in desperate need of technology

01 Centralize regulations & compliance requirements
A major part of regulatory compliance management is staying on top of countless regulations and all their details. A solid content repository includes not only the regulations themselves, but also related data. By centralizing your regulations and compliance requirements, you’ll be able to start classifying them, so you can eventually search regulations and requirements by type, region of applicability, effective dates, and modification dates.
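
Purely as an illustration (the regulations, categories and owners below are hypothetical placeholders), a central library can start as a classified list of requirements that supports filtering by type, region and effective date:

    from dataclasses import dataclass
    from datetime import date

    @dataclass
    class Requirement:
        regulation: str
        requirement_id: str
        category: str        # e.g. "reporting", "privacy", "anti-bribery"
        region: str          # region of applicability
        effective: date
        last_modified: date
        owner: str

    REPOSITORY = [
        Requirement("Reg-A", "A-17", "reporting", "EU",
                    date(2023, 1, 1), date(2024, 3, 15), "finance"),
        Requirement("Reg-B", "B-02", "privacy", "US",
                    date(2022, 7, 1), date(2022, 7, 1), "legal"),
    ]

    def search(repo, category=None, region=None, effective_before=None):
        """Filter the central library by its classification attributes."""
        results = repo
        if category is not None:
            results = [r for r in results if r.category == category]
        if region is not None:
            results = [r for r in results if r.region == region]
        if effective_before is not None:
            results = [r for r in results if r.effective <= effective_before]
        return results

    print(search(REPOSITORY, category="reporting", region="EU"))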

02 Map to risks, policies, & controls
Classifying regulatory requirements is no good on its own. They need to be connected to risk management, control and compliance processes, and system functionality. This is the most critical part of a compliance management system.

Typically, in order to do this mapping, you need:

  • An assessment of non-compliance risks for each requirement.
  • Defined processes for how each requirement is met.
  • Defined controls that make sure the compliance process is effective in reducing non-compliance risks.
  • Controls mapped to specific analytics monitoring tests that confirm their effectiveness on an ongoing basis.
  • Assigned owners for each mapped requirement. Specific processes and controls may be assigned to sub-owners.

03 Connect to data & use advanced analytics

Using different automated tests to access and analyze data is foundational to a data-driven compliance management approach.

The range of data sources and data types needed to perform compliance monitoring can be humongous. When it comes to areas like FCPA or other anti-bribery and corruption regulations, you might need to access entire populations of purchase and payment transactions, general ledger entries, payroll, and travel and entertainment expenses. And that’s just the internal sources. External sources could include things like the Politically Exposed Persons database or Sanctions Checks.

Extensive suites of tests and analyses can be run against the data to determine whether compliance controls are working effectively and if there are any indications of transactions or activities that fail to comply with regulations. The results of these analyses identify specific anomalies and control exceptions, as well as provide statistical data and trend reports that indicate changes in compliance risk levels.
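
To illustrate the shape of such tests, the Python sketch below flags payments made to counterparties on a watch list and possible duplicate payments. The field names, watch list and sample records are invented for the example; real monitoring runs against full transaction populations with tuned rules.

    from collections import defaultdict

    WATCH_LIST = {"ACME TRADING LTD", "GLOBEX HOLDINGS"}   # illustrative

    def run_payment_tests(payments):
        """Return exception records for two simple compliance tests."""
        exceptions = []
        seen = defaultdict(list)
        for p in payments:
            # Test 1: counterparty appears on a sanctions / watch list
            if p["vendor"].upper() in WATCH_LIST:
                exceptions.append({**p, "issue": "watch-list counterparty"})
            # Test 2: same vendor, amount and invoice number paid more than once
            key = (p["vendor"], p["amount"], p["invoice"])
            if seen[key]:
                exceptions.append({**p, "issue": "possible duplicate payment"})
            seen[key].append(p["payment_id"])
        return exceptions

    payments = [
        {"payment_id": 1, "vendor": "Globex Holdings", "amount": 9800.0, "invoice": "INV-77"},
        {"payment_id": 2, "vendor": "Initech",         "amount": 1200.0, "invoice": "INV-12"},
        {"payment_id": 3, "vendor": "Initech",         "amount": 1200.0, "invoice": "INV-12"},
    ]
    for e in run_payment_tests(payments):
        print(e["payment_id"], e["issue"])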

Truly delivering on this step involves using the right technology since the requirements for accessing and analyzing data for compliance are demanding. Generalized analytic software is seldom able to provide more than basic capabilities, which are far removed from the functionality of specialized risk and control monitoring technologies.

04 Monitor incidents & manage issues

It’s important to quickly and efficiently manage incidents once they’re flagged. But systems that create huge amounts of “false positives” or “false negatives” can end up wasting a lot of time and resources. On the other hand, a system that fails to detect high-risk activities creates risk of major financial and reputational damage. The monitoring technology you choose should let you fine-tune analytics to flag actual risks and compliance failures and minimize false alarms.

The system should also allow for an issues resolution process that’s timely and maintains the integrity of responses. If the people responsible for resolving a flagged issue don’t do it adequately, an automated workflow should escalate the issues to the next level.

Older software can’t meet the huge range of incident monitoring and issues management requirements. Or it can require a lot of effort and expense to modify the procedures when needed.

05 Manage investigations

As exceptions and incidents are identified, some turn into issues that need in-depth investigation. Software helps this investigation process by allowing the user to document and log activities. It should also support easy collaboration among everyone involved in the investigation process.

Effective security must be in place around access to all aspects of a compliance management system. But it’s extra important to have a high level of security and privacy for the investigation management process.

06 Use surveys, questionnaires & certifications

Going beyond just transactional analysis and monitoring, it’s also important to understand what’s actually happening right now by collecting the input of those working on the front lines.

Software that has built-in automated surveys and questionnaires can gather large amounts of current information directly from these individuals in different compliance roles, then quickly interpret the responses.

For example, if you’re required to comply with the Sarbanes-Oxley Act (SOX), you can use automated questionnaires and certifications to collect individual sign-off on SOX control effectiveness questions. That information is consolidated and used to support the SOX certification process far more efficiently than using traditional ways of collecting sign-off.

07 Manage regulatory changes

Regulations change constantly, and to remain compliant, you need to know quickly when those changes happen. This is because changes often mean modifications to your established procedures or controls, and that could impact your entire compliance management process.

A good compliance software system is built to withstand these revisions. It allows for easy updates to existing definitions of controls, processes, and monitoring activities.

Before software, any regulatory changes would involve huge amounts of manual activities, causing backlogs and delays. Now much (if not most) of the regulatory change process can be automated, freeing your time to manage your part of the overall compliance program.

08 Ensure regulatory examination & oversight

No one likes going through compliance reviews by regulatory bodies. It’s even worse if failures or weaknesses surface during the examination.

But if that happens to you, it’s good to know that many regulatory authorities have proven to be more accommodating and (dare we say) lenient when your compliance process is strategic, deliberate, and well designed.

There are huge benefits, in terms of efficiency and cost savings, by using a structured and well-managed regulatory compliance system. But the greatest economic benefit happens when you can avoid a potentially major financial penalty as a result of replacing an inherently unreliable and complicated legacy system with one that’s purpose-built and data-driven.

Click here to access Galvanize’s new White Paper

EIOPA Financial Stability Report July 2020

The unexpected COVID-19 outbreak led European countries to shut down major parts of their economies in an effort to contain the virus. Financial markets experienced huge losses and flight-to-quality investment behaviour. Governments and central banks committed to providing significant emergency packages to support the economy, as the economic shock caused by demand and supply disruptions, and its reflection in the financial markets, is expected to challenge economic growth, labour markets and consumer sentiment across Europe for an uncertain period of time.

Amid an unprecedented downward shift of interest rate curves during March, reflecting the flight-to-quality behaviour, credit spreads of corporates and sovereigns increased for riskier assets, effectively leading to a double-hit scenario. Equity markets dropped dramatically, showing extreme levels of volatility in response to uncertainty about the effects of the virus and about the status and effectiveness of government and central bank support programs. Despite the stressed market environment, there were signs of improvement following the announcements of the support packages and as economies gradually began to reopen. The virus outbreak also led to extraordinary working conditions, with part of the services sector working from home; if those conditions persist after the outbreak, demand for commercial real estate investments and their market value could decrease.

Within this challenging environment, insurers are exposed to solvency risk, profitability risk and reinvestment risk. The sudden reassessment of risk premia and the increase in default risk could trigger large-scale rating downgrades and reduce the value of investments held by insurers and IORPs, especially exposures to highly indebted corporates and sovereigns. On the other hand, the risk of ultra-low interest rates persisting for long has further increased. Factoring in the knock-on effects of the weakening macroeconomy, insurers’ future own-funds positions could be further challenged, due to potentially lower levels of profitable new business written, accompanied by an increased volume of profitable in-force policies being surrendered or lapsed.

Finally, liquidity risk has resurfaced, due to the potential for mass-lapse events and higher-than-expected virus- and litigation-related claims, accompanied by decreased premium inflows.

For the European occupational pension sector, the negative impact of COVID-19 on the asset side is mainly driven by deteriorating equity market prices, as, in a number of Member States, IORPs allocate significant proportions of the asset portfolio (up to nearly 60%) to equity investments. However, the investment allocation is highly divergent amongst Member States, so that IORPs in other Member States hold up to 70% of their investments in bonds, mostly sovereign bonds, where the widening of credit spreads impairs their market value. The liability side is already pressured due to low interest rates and, where market-consistent valuation is applied, due to low discount rates. The funding and solvency ratios of IORPs are determined by national law and, as could be seen in the 2019 IORP stress test results, have been under pressure and are certainly negatively impacted by this crisis. The current situation may lead to benefit cuts for members and may require sponsoring undertakings to finance funding gaps, which may lead to additional pressure on the real economy and on entities sponsoring an IORP.

Climate risks remain one of the focal points for the insurance and pension industry, with Environmental, Social and Governance (ESG) factors increasingly shaping investment decisions of insurers and pension funds but also affecting their underwriting. In response to climate-related risks, the EU presented in mid-December the European Green Deal, a roadmap for making the EU climate neutral by 2050, providing actions meant to boost the efficient use of resources by

  • moving to a clean, circular economy and stopping climate change,
  • reversing biodiversity loss
  • and cutting pollution.

At the same time, natural catastrophe-related losses were milder than in the previous year, but asymmetrically shifted towards poorer countries lacking relevant insurance coverage.

Cyber risks have become increasingly relevant across the financial system, in particular during the virus outbreak, due to the new working conditions that the confinement measures imposed. Amid the extraordinary en masse remote working arrangements, an increased number of cyber-attacks has been reported against both individuals and healthcare systems. With increasing attention on cyber risks at both national and European level, EIOPA contributed to building a strong, reliable cyber insurance market by publishing its strategy for cyber underwriting, and has also been actively involved in promoting cyber resilience in the insurance and pensions sectors.

Click here to access EIOPA’s detailed Financial Stability Report July 2020