A Practical Guide to Analytics and AI in the Cloud With Legacy Data

Introduction

Businesses that rely on legacy data sources such as the mainframe have invested heavily in building a reliable data platform. At the same time, these enterprises want to move data into the cloud for the latest in analytics, data science and machine learning.

The Importance of Legacy Data

Mainframe is still the processing backbone for many organizations, constantly generating important business data.

It’s crucial to consider the following:

MAINFRAME IS THE ENTERPRISE TRANSACTION ENVIRONMENT

In 2019, there was a 55% increase in transaction volume on mainframe environments. Studies estimate that 2.5 billion transactions are run per day, per legacy system across the world.

LEGACY IS THE FUEL BEHIND CUSTOMER EXPERIENCES

Within industries such as financial services and insurance, most customer information lives on legacy systems. Over 70% of enterprises say their customer-facing applications are completely or very reliant on mainframe processing.

BUSINESS-CRITICAL APPLICATIONS RUN ON LEGACY SYSTEMS

Mainframe often holds business-critical information and applications — from credit card transactions to claims processing. Over half of enterprises with a mainframe run more than half of their business-critical applications on the platform.

However, these legacy systems also present a limitation for an organization on its analytics and data science journey. While moving everything to the cloud may not be the answer, identifying ways in which you can start a legacy modernization process is crucial to the next generation of data and AI initiatives.

The Cost of Legacy Data

Across the enterprise, legacy systems such as the mainframe serve as a critical piece of infrastructure that is ripe for integration with modern analytics platforms. If a modern analytics platform is only as good as the data fed into it, then enterprises must include all data sources to succeed. However, many complexities can arise when organizations look to build data integration pipelines between their modern analytics platform and legacy sources. As a result, the plans made to connect these two areas are often easier said than done.

DATA SILOS HINDER INNOVATION

Over 60% of IT professionals with legacy and modern technology in house are finding that data silos are negatively affecting their business. As data volumes increase, IT can no longer rely on current data integration approaches to solve their silo challenges.

CLOUDY BUSINESS INSIGHTS

Business demands that more decisions be driven by data. Still, few IT professionals who work with legacy systems feel they are successful in delivering data insights beyond their immediate department. Data-driven insights will be the key to competitive success, and the inability to provide them puts a business at risk.

SKILLS GAP WIDENS

While it may be difficult to find skills for the latest technology, it’s becoming even harder to find skills for legacy platforms. Enterprises have only replaced 37% of the mainframe workforce lost over the past five years. As a result, the knowledge needed to integrate mainframe data into analytics platforms is disappearing. While the drive for building a modern analytics platform is more powerful than ever, taking this initiative and improving data integration practices that encompass all enterprise data has never been more challenging.

The success of building a modern analytics platform hinges on understanding the common challenges of integrating legacy data sources and choosing the right technologies that can scale with the changing needs of your organization.

Challenges Specific to Extracting Mainframe Data

With so much valuable data on mainframe, the most logical thing to do would be to connect these legacy data sources to a modern data platform. However, many complexities can occur when organizations begin to build integration pipelines to legacy sources. As a result, the plans made to connect these two areas are often easier said than done. Shared challenges of extracting mainframe data for integration with modern analytics platforms include the following:

DATA STRUCTURE

It’s common for legacy data not to be readily compatible with downstream analytics platforms, open-source frameworks and data formats. The varied structures of legacy data sources differ from relational data. Legacy data sources have traits such as

  • hierarchical tables,
  • embedded headers and trailers,
  • and complex data structures (e.g., nested, repeated or redefined elements).

If the COBOL redefines and logic are set up incorrectly at the start of a data integration workflow, legacy data structures risk slowing processing speeds to the point of business disruption and can lead to incorrect data for downstream consumption.

METADATA

COBOL copybooks can be a massive hurdle to overcome when integrating mainframe data. Copybooks are the metadata blocks that define the physical layout of data but are stored separately from that data. As a result, they can be quite complicated, containing not just formatting information but also logic in the form of, for example, nested OCCURS DEPENDING ON clauses. For many mainframe files, hundreds of copybooks may map to a single file. Feeding mainframe data directly into an analytics platform, without interpreting this metadata, can result in significant data confusion.
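To make the copybook problem concrete, here is a minimal sketch of how a copybook-described layout translates into byte offsets for slicing fixed-length records in Python. The field names, offsets and lengths are hypothetical, invented only to illustrate the idea; a real copybook parser must also handle REDEFINES, OCCURS and variable-length constructs.

```python
# Hypothetical layout derived from a (simplified) copybook; not a real copybook parser.
COPYBOOK_LAYOUT = [
    # (field name, byte offset, byte length)
    ("CUST-ID",       0, 10),
    ("CUST-NAME",    10, 30),
    ("ACCT-BALANCE", 40,  5),   # e.g., a packed-decimal (COMP-3) field
    ("REGION-CODE",  45,  2),
]

def slice_record(record: bytes) -> dict:
    """Split one fixed-length record into raw byte fields using the layout."""
    return {name: record[offset:offset + length]
            for name, offset, length in COPYBOOK_LAYOUT}

# Example: a 47-byte record read from a mainframe extract (placeholder bytes).
raw = b"\xf0" * 47
fields = slice_record(raw)
print(fields["CUST-ID"])
```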

DATA MAPPING

Unlike an RDBMS, which requires data to be entered into tables and columns, nothing enforces a set data structure on the mainframe. COBOL copybooks are incredibly flexible: they can

  • group multiple fields into one,
  • subdivide a field into several fields,
  • or ignore whole sections of a record.

As a result, data mapping issues will arise. The copybooks reflect the needs of the program, not the needs of a data-driven view.

DIFFERENT STORAGE FORMATS

Numeric values are often stored one way on the mainframe and must be stored differently when the data moves to the cloud. Mainframes also use a different character encoding scheme (EBCDIC, an 8-bit encoding, vs. ASCII, a 7-bit encoding). In addition, multiple numeric encoding schemes are used to “pack” numbers into less storage space (e.g., packed decimal), and there are even techniques that use individual bits to store data.
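As an illustration of these storage differences, the sketch below decodes EBCDIC text with Python's built-in cp037 codec and unpacks a packed-decimal (COMP-3) field. It is a simplified example, not a production-grade converter: real fields vary in sign conventions, scale and code page.

```python
# Sketch of two common mainframe-to-cloud conversions (illustrative only).
from decimal import Decimal

def ebcdic_to_text(raw: bytes) -> str:
    """Decode EBCDIC-encoded text (code page 037) to a Python string."""
    return raw.decode("cp037")

def unpack_comp3(raw: bytes, scale: int = 2) -> Decimal:
    """Decode a packed-decimal field: two digits per byte, sign in the last nibble."""
    nibbles = []
    for byte in raw:
        nibbles.append((byte >> 4) & 0x0F)
        nibbles.append(byte & 0x0F)
    sign = nibbles.pop()                      # the final nibble holds the sign
    value = int("".join(str(d) for d in nibbles))
    if sign == 0x0D:                          # 0xD conventionally marks a negative value
        value = -value
    return Decimal(value) / (10 ** scale)

print(ebcdic_to_text(b"\xc8\xc5\xd3\xd3\xd6"))   # -> "HELLO"
print(unpack_comp3(b"\x12\x34\x5c"))             # -> Decimal("123.45")
```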

Whether it’s a lack of internal knowledge on how to handle legacy data or a rigid data framework, ignoring legacy data when building a modern data analytics platform means missing valuable information that can enhance any analytics project.

Pain Points of Building a Modern Analytics Platform

Tackling the challenges of mainframe data integration is no simple task. Besides determining the best approach for integrating these legacy data sources, IT departments are also dealing with the everyday challenges of running a department. Regardless of the size of an organization, there are daily struggles everyone faces, from siloed data to lack of IT skills.

ENVIRONMENT COMPLEXITY

Many organizations have adopted hybrid and multi-cloud strategies to

  • manage data proliferation,
  • gain flexibility,
  • reduce costs
  • and increase capacities.

Cloud storage and the lakehouse architecture offer new ways to manage and store data. However, organizations still need to maintain and integrate their mainframes and other on-premises systems — resulting in a challenging integration strategy that must encompass a variety of environments.

SILOED DATA

The increase in data silos adds further complexity to growing data volumes. Silos arise as a direct result of the growing number of data sources. Research has shown that data silos directly inhibit the success of analytics and machine learning projects.

PERFORMANCE

The processing requirements of growing data volumes can cause a slowdown in a data stream. Loading hundreds, or even thousands, of database tables into a big data platform — combined with inefficient use of system resources — can create a data bottleneck that hampers the performance of data integration pipelines.

DATA QUALITY

Industry studies have shown that up to 90% of a data scientist’s time is spent getting data into the right condition for use in analytics. In other words, data feeding analytics often cannot be trusted without significant preparation. Data quality processes that include

  • mapping,
  • matching,
  • linking,
  • merging,
  • deduplication
  • and actionable data

are critical to providing frameworks with trusted data; a minimal sketch of these steps follows.
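As a rough illustration of the mapping, matching and deduplication steps above, here is a minimal pandas sketch. The column names and matching rule are hypothetical; real pipelines typically add fuzzy matching, survivorship rules and audit trails.

```python
# Minimal data quality sketch with pandas (column names are hypothetical).
import pandas as pd

records = pd.DataFrame({
    "cust_name": ["ACME Corp ", "acme corp", "Globex Inc"],
    "postcode":  ["10001", "10001", "60601"],
    "balance":   [120.0, 120.0, 75.5],
})

# Mapping/standardization: normalize the matching keys.
records["name_key"] = records["cust_name"].str.strip().str.lower()

# Matching and deduplication: treat identical (name_key, postcode) pairs as one entity,
# keeping the record with the highest balance as the survivor.
deduped = (records
           .sort_values("balance", ascending=False)
           .drop_duplicates(subset=["name_key", "postcode"], keep="first"))

print(deduped[["cust_name", "postcode", "balance"]])
```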

DATA TYPES AND FORMATS

Valuable data for analytics comes from a range of sources across the organization: CRM, ERP, mainframe and online transaction processing systems. However, as organizations rely on more systems, the variety of data types and formats continues to grow.

IT now has the challenge of making big data, NoSQL and unstructured data all readable for downstream analytics solutions.

SKILLS GAP AND RESOURCES

The need for workers who understand how to build data integration frameworks for mainframe, cloud, and cluster data sources is increasing, but the market cannot keep up. Studies have shown that unfilled data engineer jobs and data scientist jobs have increased 12x in the past year alone. As a result, IT needs to figure out how to integrate data for analytics with the skills they have internally.

What Your Cloud Data Platform Needs

A new data management paradigm has emerged that combines the best elements of data lakes and data warehouses, enabling

  • analytics,
  • data science
  • and machine learning

on all your business data: lakehouse.

Lakehouses are enabled by a new system design: implementing data structures and data management features similar to those in a data warehouse, directly on the kind of low-cost storage used for data lakes. They are what you would get if you redesigned the data warehouse for the modern world, now that cheap and highly reliable storage (in the form of object stores) is available.

This new paradigm is the vision for data management that provides the best architecture for modern analytics and AI. It will help organizations capture data from hundreds of sources, including legacy systems, and make that data available and ready for analytics, data science and machine learning.

Lakehouse

A lakehouse has the following key features (a brief sketch follows the list):

  • Open storage formats, such as Parquet, avoid lock-in and provide accessibility to the widest variety of analytics tools and applications
  • Decoupled storage and compute provides the ability to scale to many concurrent users by adding compute clusters that all access the same storage cluster
  • Transaction support handles failure scenarios and provides consistency when multiple jobs concurrently read and write data
  • Schema management enforces the expected schema when needed and handles evolving schemas as they change over time
  • Business intelligence tools directly access the lakehouse to query data, enabling access to the latest data without the cost and complexity of replicating data across a data lake and a data warehouse
  • Data science and machine learning tools used for advanced analytics rely on the same data repository
  • First-class support for all data types across structured, semi-structured and unstructured, plus batch and streaming data
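The sketch below shows what these features look like in practice, assuming a PySpark session with the open-source Delta Lake package available; the file paths and column name are placeholders. It lands a legacy extract in an open format with transactional, schema-enforced writes, and the same table then serves both BI queries and data science jobs.

```python
from pyspark.sql import SparkSession

spark = (SparkSession.builder
         .appName("lakehouse-sketch")
         # Delta Lake configuration (assumes the delta-spark package is installed)
         .config("spark.sql.extensions", "io.delta.sql.DeltaSparkSessionExtension")
         .config("spark.sql.catalog.spark_catalog",
                 "org.apache.spark.sql.delta.catalog.DeltaCatalog")
         .getOrCreate())

# Land a legacy extract (hypothetical path) into an open, transactional table format.
extract = spark.read.option("header", True).csv("/landing/mainframe_extract.csv")

# Transactional write with schema enforcement: a mismatched schema fails the write
# instead of silently corrupting the table.
extract.write.format("delta").mode("append").save("/lakehouse/transactions")

# BI tools and ML jobs read the same table; no copy into a separate warehouse.
df = spark.read.format("delta").load("/lakehouse/transactions")
df.groupBy("REGION_CODE").count().show()
```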

Click here to access Databricks’ and Precisely’s White Paper

How To Build a CX Program And Transform Your Business

Customer Experience (CX) is a catchy business term that has been used for decades, but until recently, measuring and managing it was not possible. Now, with the evolution of technology, a company can build and operationalize a true CX program.

For years, companies championed NPS surveys, CSAT scores, web feedback, and other sources of data as the drivers of “Customer Experience” – however, these singular sources of data don’t give a true, comprehensive view of how customers feel, think, and act. Unfortunately, most companies aren’t capitalizing on the benefits of a CX program. Fewer than 10% of companies have a CX executive, and of those, only 14% treat Customer Experience as a true program: the aggregation and analysis of all customer interactions, with the objective of uncovering and disseminating insights across the company in order to improve the experience. In a time when the customer experience separates the winners from the losers, CX must be more of a priority for ALL businesses.

This not only includes the analysis of typical channels in which customers directly interact with your company (calls, chats, emails, feedback, surveys, etc.) but all the channels in which customers may not be interacting directly with you – social, reviews, blogs, comment boards, media, etc.


In order to understand the purpose of a CX team and how it operates, you first need to understand how most businesses organize, manage, and carry out their customer experiences today.

Essentially, a company’s customer experience is owned and managed by a handful of teams. This includes, but is not limited to:

  • digital,
  • brand,
  • strategy,
  • UX,
  • retail,
  • design,
  • pricing,
  • membership,
  • logistics,
  • marketing,
  • and customer service.

All of these teams have a hand in customer experience.

To ensure that they are working towards a common goal, they must

  1. communicate in a timely manner,
  2. meet and discuss upcoming initiatives and projects,
  3. and discuss results along with future objectives.

In a perfect world, every team has the time and passion to accomplish these tasks to ensure the customer experience is in sync with their work. In reality, teams end up scrambling for information and understanding of how each business function is impacting the customer experience – sometimes after the CX program has already launched.


This process is extremely inefficient and can lead to serious problems across the customer experience. These problems can lead to irreparable financial losses. If business functions are not on the same page when launching an experience, it creates a broken one for customers. Siloed teams create siloed experiences.

There are plenty of companies that operate in a semi-siloed manner and feel it is successful. What these companies don’t understand is that customer experience issues often occur between the ownership of these silos, in what some refer to as the “customer experience abyss,” where no business function claims ownership. Customers react to these broken experiences by communicating their frustration through different communication channels (chats, surveys, reviews, calls, tweets, posts etc.).

For example, if a company launches a new subscription service and customers are confused about the pricing model, is it the job of customer service to explain it to customers?  What about those customers that don’t contact the business at all? Does marketing need to modify their campaigns? Maybe digital needs to edit the nomenclature online… It could be all of these things. The key is determining which will solve the poor customer experience.

The objective of a CX program is to focus deeply on what customers are saying and shift business teams to become advocates for what they say. Once advocacy is achieved, the customer experience can be improved at scale with speed and precision. A premium customer experience is the key to company growth and customer retention. How important is the customer experience?

You may be saying to yourself, “We already have teams examining our customer data, no need to establish a new team to look at it.” While this may be true, the teams are likely taking a siloed approach to analyzing customer data by only investigating the portion of the data they own.

For example, the social team looks at social data, the digital team analyzes web feedback and analytics, the marketing team reviews surveys and performs studies, etc. Seldom do these teams come together and combine their data to get a holistic view of the customer. Furthermore, when it comes to prioritizing CX improvements, they do so based on an incomplete view of the customer.

Consolidating all customer data gives a unified view of your customers while lessening the workload and increasing the rate at which insights are generated. Customers’ experiences with marketing, digital, and customer service all produce different interactions. Breaking these interactions into separate, disconnected components is the reason companies struggle to understand the true customer experience and miss the big picture on how to improve it.
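As a small illustration of what consolidation can look like, the pandas sketch below stitches interactions from a few channels into one view per customer. The source frames, channels and column names are hypothetical stand-ins for whatever systems hold your calls, surveys and social mentions.

```python
# Hedged sketch: unifying interactions from separate channels into one customer view.
import pandas as pd

calls   = pd.DataFrame({"customer_id": [1, 2], "text": ["billing issue", "cancel plan"]})
surveys = pd.DataFrame({"customer_id": [1],    "text": ["confusing pricing page"]})
tweets  = pd.DataFrame({"customer_id": [2],    "text": ["@brand checkout keeps failing"]})

channels = {"call": calls, "survey": surveys, "social": tweets}

# Tag each interaction with its channel and stack everything into one table.
unified = pd.concat(
    [frame.assign(channel=name) for name, frame in channels.items()],
    ignore_index=True,
)

# One consolidated view per customer, ready for downstream text analytics.
per_customer = unified.groupby("customer_id")["text"].apply(list)
print(per_customer)
```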

The CX team, once established, will be responsible for creating a unified view of the customer which will provide the company with an unbiased understanding of how customers feel about their experiences as well as their expectations of the industry. These insights will provide awareness, knowledge, and curiosity that will empower business functions to improve the end-to-end customer experience.

CX programs are disruptive. A successful CX program will uncover insights that align with current business objectives and some insights that don’t at all. So, what do you do when you run into that stone wall? How do you move forward when a business function refuses to adopt the voice of the customer? Call in back-up from an executive who understands the value of the voice of the customer and why it needs to be top of mind for every function.

When creating a disruptive program like CX, an executive owner is needed to overcome business hurdles along the way. Ideally, this executive owner will support the program and promote it to the broader business functions. In order to scale and become more widely adopted, it is also helpful to have executive support when the program begins.

The best candidates for initial ownership are typically marketing, analytics or operations executives. Along with understanding the value a CX program can offer, they should also understand the business’ current data landscape and help provide access to these data sets. Once the CX team has access to all the available customer data, it will be able to aggregate all necessary interactions.

Executive sponsors will help dramatically in regard to CX program adoption and eventual scaling. Executive sponsors

  • can provide the funding to secure the initial success,
  • promote the program to ensure other business functions work closer to the program,
  • and remove roadblocks that may otherwise take weeks to get over.

Although an executive sponsor is not necessary, it can make your life exponentially easier while you build, launch, and execute your CX program. Your customers don’t always tell you what you want to hear, and that can be difficult for some business functions to handle. When this is the case, some business functions will try to discredit insights altogether if they don’t align with their goals.

Data grows exponentially every year, faster than any company can manage. In 2016, 90% of the world’s data had been created in the previous two years. 80% of that data was unstructured language. The hype of “Big Data” has passed and the focus is now on “Big Insights” – how to manage all the data and make it useful. A company should not be allocating resources to collecting more data through expensive surveys or market research – instead, they should be focused on doing a better job of listening and reacting to what customers are already saying, by unifying the voice of the customer with data that is already readily available.

It’s critical to identify all the available customer interactions and determine their value and richness. Be sure to think about all the forms of direct and indirect interaction customers have with your company. This includes:

[Figure: direct and indirect customer channels, such as calls, chats, emails, surveys, social, reviews, blogs and media]

These channels are just a handful of the most popular avenues customers use to engage with brands. Your company may have more, fewer, or none of these. Regardless, the focus should be on aggregating as many as possible to create a holistic view of the customer. This does not mean only aggregating your phone calls and chats; it includes every channel where your customers talk with, at, or about your company. You can’t be selective when it comes to analyzing your customers by channel. All customers are important, and they may have different ways of communicating with you.

Imagine if someone only listened to their significant other in the two rooms where they spend the most time, say the family room and kitchen. They would probably have a good understanding of the overall conversations (similar to a company only reviewing calls, chats, and social). However, ignoring them in the dining room, bedroom, kids’ rooms, and backyard, would inevitably lead to serious communication problems.

It’s true that phone, chat, and social data is extremely rich, accessible, and popular, but that doesn’t mean you should ignore other customers. Every channel is important. Each is used by a different customer, in a different manner, and serves a different purpose, some providing more context than others.

You may find your most important customers aren’t always the loudest and may be interacting with you through an obscure channel you never thought about. You need every customer channel to fully understand their experience.

Click here to access Topbox’s detailed study

Better practices for compliance management

The main compliance challenges

We know that businesses and government entities alike struggle to manage compliance requirements. Many have put up with challenges for so long—often with limited resources—that they no longer see how problematic the situation has become.

FIVE COMPLIANCE CHALLENGES YOU MIGHT BE DEALING WITH

01 COMPLIANCE SILOS
It’s not uncommon that, over time, separate activities, roles, and teams develop to address different compliance requirements. There’s often a lack of integration and communication among these teams or individuals. The result is duplicated efforts—and the creation of multiple clumsy and inefficient systems. This is then perpetuated as compliance processes change in response to regulations, mergers and acquisitions, or other internal business re-structuring.

02 NO SINGLE VIEW OF COMPLIANCE ASSURANCE
Siloed compliance systems also make it hard for senior management to get an overview of current compliance activities and perform timely risk assessments. If you can’t get a clear view of compliance risks, then chances are good that a damaging risk will slip under the radar, go unaddressed, or simply be ignored.

03 COBBLED TOGETHER, HOME-GROWN SYSTEMS
Using generalized software, like Excel spreadsheets and Word documents, in addition to shared folders and file systems, might have made sense at one point. But, as requirements become more complex, these systems become more frustrating, inefficient, and risky. Compiling hundreds or thousands of spreadsheets to support compliance management and regulatory reporting is a logistical nightmare (not to mention time-consuming). Spreadsheets are also prone to error and limited because they don’t provide audit trails or activity logs.

04 OLD SOFTWARE, NOT DESIGNED TO KEEP UP WITH FREQUENT CHANGES
You could be struggling with older compliance software products that aren’t designed to deal with constant change. These can be increasingly expensive to upgrade, not the most user-friendly, and difficult to maintain.

05 NOT USING AUTOMATED MONITORING
Many compliance teams are losing out by not using analytics and data automation. Instead, they rely heavily on sample testing to determine if compliance controls and processes are working, so huge amounts of activity data are never actually checked.

Transform your compliance management process

Good news! There are some practical steps you can take to transform compliance processes and systems so that they become way more efficient and far less expensive and painful.

It’s all about optimizing the interactions of people, processes, and technology around regulatory compliance requirements across the entire organization.

It might not sound simple, but it’s what needs to be done. And, in our experience, it can be achieved without becoming massively time-consuming and expensive. Technology for regulatory compliance management has evolved to unite processes and roles across all aspects of compliance throughout your organization.

Look, for example, at how technology like Salesforce (a cloud-based system with big data analytics) has transformed sales, marketing, and customer service. Now, there’s similar technology which brings together different business units around regulatory compliance to improve processes and collaboration for the better.

Where to start?

Let’s look at what’s involved in establishing a technology-driven compliance management process. One that’s driven by data and fully integrated across your organization.

THE BEST PLACE TO START IS THE END

Step 1: Think about the desired end-state.

First, consider the objectives and the most important outcomes of your new process. How will it impact the different stakeholders? Take the time to clearly define the metrics you’ll use to measure your progress and success.

A few desired outcomes:

  • Accurately measure and manage the costs of regulatory and policy compliance.
  • Track how risks are trending over time, by regulation, and by region.
  • Understand, at any point in time, the effectiveness of compliance-related controls.
  • Standardize approaches and systems for managing compliance requirements and risks across the organization.
  • Efficiently integrate reporting on compliance activities with those of other risk management functions.
  • Create a quantified view of the risks faced due to regulatory compliance failures for executive management.
  • Increase confidence and response times around changing and new regulations.
  • Reduce duplication of efforts and maximize overall efficiency.

NOW, WHAT DO YOU NEED TO SUPPORT YOUR OBJECTIVES?

Step 2: Identify the activities and capabilities that will get you the desired outcomes.

Consider the different parts of the compliance management process below. Then identify the steps you’ll need to take or the changes you’ll need to make to your current activity that will help you achieve your objectives. We’ve put together a cheat sheet to help this along.


IDENTIFY & IMPLEMENT COMPLIANCE CONTROL PROCEDURES

  • 01 Maintain a central library of regulatory requirements and internal corporate policies, allocated to owners and managers.
  • 02 Define control processes and procedures that will ensure compliance with regulations and policies.
  • 03 Link control processes to the corresponding regulations and corporate policies.
  • 04 Assess the risk of control weaknesses and failure to comply with regulations and policies.

RUN TRANSACTIONAL MONITORING ANALYTICS

  • 05 Monitor the effectiveness of controls and compliance activities with data analytics.
  • 06 Get up-to-date confirmation of the effectiveness of controls and compliance from owners with automated questionnaires or certification of adherence statements.

MANAGE RESULTS & RESPOND

  • 07 Manage the entire process of exceptions generated from analytic monitoring and from the generation of questionnaires and certifications.

REPORT RESULTS & UPDATE ASSESSMENTS

  • 08 Use the results of monitoring and exception management to produce risk assessments and trends.
  • 09 Identify new and changing regulations as they occur and update repositories and control and compliance procedures.
  • 10 Report on the current status of compliance management activities from high- to low-detail levels.

IMPROVE THE PROCESS

  • 11 Identify duplicate processes and fix procedures to combine and improve controls and compliance tests.
  • 12 Integrate regulatory compliance risk management, monitoring, and reporting with overall risk management activities.

Eight compliance processes in desperate need of technology

01 Centralize regulations & compliance requirements
A major part of regulatory compliance management is staying on top of countless regulations and all their details. A solid content repository includes not only the regulations themselves, but also related data. By centralizing your regulations and compliance requirements, you’ll be able to start classifying them, so you can eventually search regulations and requirements by type, region of applicability, effective dates, and modification dates.
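One possible shape for such a repository is sketched below: each regulation carries classification fields so it can be filtered by type, region and dates. The fields and sample entries are hypothetical, not a recommended schema.

```python
# Illustrative sketch of a central regulation library with searchable classification.
from dataclasses import dataclass
from datetime import date

@dataclass
class Regulation:
    ref: str
    kind: str            # e.g., "privacy", "anti-bribery", "prudential"
    region: str
    effective: date
    modified: date
    owner: str

library = [
    Regulation("REG-001", "privacy", "EU", date(2018, 5, 25), date(2021, 6, 4), "DPO"),
    Regulation("REG-002", "anti-bribery", "US", date(1977, 12, 19), date(2020, 1, 1), "Compliance"),
]

def search(regs, kind=None, region=None):
    """Filter the library by its classification fields."""
    return [r for r in regs
            if (kind is None or r.kind == kind)
            and (region is None or r.region == region)]

print(search(library, region="EU"))
```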

02 Map to risks, policies, & controls
Classifying regulatory requirements is no good on its own. They need to be connected to risk management, control and compliance processes, and system functionality. This is the most critical part of a compliance management system.

Typically, in order to do this mapping, you need:

  • An assessment of non-compliance risks for each requirement.
  • Defined processes for how each requirement is met.
  • Defined controls that make sure the compliance process is effective in reducing non-compliance risks.
  • Controls mapped to specific analytics monitoring tests that confirm their effectiveness on an ongoing basis.
  • Assigned owners for each mapped requirement. Specific processes and controls may be assigned to sub-owners.

03 Connect to data & use advanced analytics

Using different automated tests to access and analyze data is foundational to a data-driven compliance management approach.

The range of data sources and data types needed to perform compliance monitoring can be humongous. When it comes to areas like FCPA or other anti-bribery and corruption regulations, you might need to access entire populations of purchase and payment transactions, general ledger entries, payroll, and travel and entertainment expenses. And that’s just the internal sources. External sources could include things like the Politically Exposed Persons database or Sanctions Checks.

Extensive suites of tests and analyses can be run against the data to determine whether compliance controls are working effectively and if there are any indications of transactions or activities that fail to comply with regulations. The results of these analyses identify specific anomalies and control exceptions, as well as provide statistical data and trend reports that indicate changes in compliance risk levels.
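As a toy illustration of this kind of test (not a substitute for specialized monitoring technology), the sketch below runs two simple checks over a payment file: possible duplicates and payments to watchlisted counterparties. The column names, watchlist and data are invented.

```python
# Hedged sketch of automated compliance monitoring tests over payment data.
import pandas as pd

payments = pd.DataFrame({
    "payment_id": [101, 102, 103, 104],
    "vendor":     ["Acme", "Acme", "Umbra Ltd", "Initech"],
    "amount":     [5000.0, 5000.0, 9900.0, 120.0],
})
watchlist = {"Umbra Ltd"}   # e.g., output of sanctions or PEP screening

# Test 1: possible duplicate payments (same vendor and amount).
duplicates = payments[payments.duplicated(subset=["vendor", "amount"], keep=False)]

# Test 2: payments to watchlisted counterparties.
watch_hits = payments[payments["vendor"].isin(watchlist)]

exceptions = pd.concat([duplicates.assign(test="duplicate"),
                        watch_hits.assign(test="watchlist")])
print(exceptions)
print(f"Exception rate: {len(exceptions) / len(payments):.0%}")  # feeds trend reporting
```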

Truly delivering on this step involves using the right technology since the requirements for accessing and analyzing data for compliance are demanding. Generalized analytic software is seldom able to provide more than basic capabilities, which are far removed from the functionality of specialized risk and control monitoring technologies.

04 Monitor incidents & manage issues

It’s important to quickly and efficiently manage incidents once they’re flagged. But systems that create huge amounts of “false positives” or “false negatives” can end up wasting a lot of time and resources. On the other hand, a system that fails to detect high-risk activities creates the risk of major financial and reputational damage. The monitoring technology you choose should let you fine-tune analytics to flag actual risks and compliance failures and minimize false alarms.

The system should also allow for an issues resolution process that’s timely and maintains the integrity of responses. If the people responsible for resolving a flagged issue don’t do it adequately, an automated workflow should escalate the issues to the next level.

Older software can’t meet the huge range of incident monitoring and issues management requirements. Or it can require a lot of effort and expense to modify the procedures when needed.

05 Manage investigations

As exceptions and incidents are identified, some turn into issues that need in-depth investigation. Software helps this investigation process by allowing the user to document and log activities. It should also support easy collaboration among everyone involved in the investigation process.

Effective security must be in place around access to all aspects of a compliance management system. But it’s extra important to have a high level of security and privacy for the investigation management process.

06 Use surveys, questionnaires & certifications

Going beyond transactional analysis and monitoring, it’s also important to understand what’s actually happening right now by collecting the input of those working on the front lines.

Software that has built-in automated surveys and questionnaires can gather large amounts of current information directly from these individuals in different compliance roles, then quickly interpret the responses.

For example, if you’re required to comply with the Sarbanes-Oxley Act (SOX), you can use automated questionnaires and certifications to collect individual sign-off on SOX control effectiveness questions. That information is consolidated and used to support the SOX certification process far more efficiently than using traditional ways of collecting sign-off.

07 Manage regulatory changes

Regulations change constantly, and to remain compliant, you need to know—quickly—when those changes happen. This is because changes can often mean modifications to your established procedures or controls, and that could impact your entire compliance management process.

A good compliance software system is built to withstand these revisions. It allows for easy updates to existing definitions of controls, processes, and monitoring activities.

Before software, any regulatory changes would involve huge amounts of manual activities, causing backlogs and delays. Now much (if not most) of the regulatory change process can be automated, freeing your time to manage your part of the overall compliance program.

08 Ensure regulatory examination & oversight

No one likes going through compliance reviews by regulatory bodies. It’s even worse if failures or weaknesses surface during the examination.

But if that happens to you, it’s good to know that many regulatory authorities have proven to be more accommodating and (dare we say) lenient when your compliance process is strategic, deliberate, and well designed.

There are huge benefits, in terms of efficiency and cost savings, to using a structured and well-managed regulatory compliance system. But the greatest economic benefit happens when you can avoid a potentially major financial penalty as a result of replacing an inherently unreliable and complicated legacy system with one that’s purpose-built and data-driven.

Click here to access Galvanize’s new White Paper

EIOPA Financial Stability Report July 2020

The unexpected COVID-19 outbreak led European countries to shut down major parts of their economies in an effort to contain the virus. Financial markets experienced huge losses and flight-to-quality investment behaviour. Governments and central banks committed to significant emergency packages to support the economy, as the economic shock, caused by demand and supply disruptions and reflected in the financial markets, is expected to challenge economic growth, the labour market and consumer sentiment across Europe for an uncertain period of time.

Amid an unprecedented downward shift of interest rate curves during March, reflecting flight-to-quality behaviour, credit spreads of corporates and sovereigns increased for riskier assets, effectively leading to a double-hit scenario. Equity markets dropped dramatically, showing extreme levels of volatility in response to uncertainty about the virus's effects and about the status and effectiveness of government and central bank support programmes. Despite the stressed market environment, there were signs of improvement following the announcements of the support packages and as economies gradually began to reopen. The virus outbreak also led to extraordinary working conditions, with part of the services sector working from home. This raises the possibility that such conditions will persist after the outbreak, which could reduce demand and market value for commercial real estate investments.

Within this challenging environment, insurers are exposed to solvency risk, profitability risk and reinvestment risk. The sudden reassessment of risk premia and the increase in default risk could trigger large-scale rating downgrades and reduce the value of investments held by insurers and IORPs, especially exposures to highly indebted corporates and sovereigns. On the other hand, the risk of ultra-low interest rates persisting for long has further increased. Factoring in the knock-on effects of the weakening macro economy, insurers' future own-funds positions could be further challenged by lower volumes of profitable new business written, accompanied by an increased volume of profitable in-force policies being surrendered or lapsed.

Finally, liquidity risk has resurfaced, due to the potential for mass-lapse events and higher-than-expected virus- and litigation-related claims, accompanied by decreased premium inflows.


For the European occupational pension sector, the negative impact of COVID-19 on the asset side is mainly driven by deteriorating equity market prices, as IORPs in a number of Member States allocate significant proportions of their asset portfolios (up to nearly 60%) to equity investments. However, investment allocations are highly divergent across Member States, and IORPs in other Member States hold up to 70% of their investments in bonds, mostly sovereign bonds, where the widening of credit spreads impairs their market value. The liability side is already pressured by low interest rates and, where market-consistent valuation is applied, by low discount rates. The funding and solvency ratios of IORPs are determined by national law and, as the 2019 IORP stress test results showed, have been under pressure and are certainly negatively impacted by this crisis. The current situation may lead to benefit cuts for members and may require sponsoring undertakings to finance funding gaps, which may put additional pressure on the real economy and on entities sponsoring an IORP.


Climate risks remain one of the focal points for the insurance and pension industry, with Environmental, Social and Governance (ESG) factors increasingly shaping the investment decisions of insurers and pension funds as well as affecting their underwriting. In response to climate-related risks, the EU presented the European Green Deal in mid-December, a roadmap for making the EU climate neutral by 2050, with actions meant to boost the efficient use of resources by

  • moving to a clean, circular economy,
  • stopping climate change,
  • reversing biodiversity loss
  • and cutting pollution.

At the same time, natural catastrophe related losses were milder than in the previous year, but were asymmetrically concentrated in poorer countries lacking relevant insurance coverage.

Cyber risks have become increasingly relevant across the financial system, particularly during the virus outbreak, given the new working conditions that confinement measures imposed. Amid the extraordinary en masse remote working arrangements, an increased number of cyber-attacks has been reported against both individuals and healthcare systems. With increasing attention to cyber risks at both national and European level, EIOPA contributed to building a strong, reliable cyber insurance market by publishing its strategy for cyber underwriting, and has also been actively involved in promoting cyber resilience in the insurance and pensions sectors.

Click here to access EIOPA’s detailed Financial Stability Report July 2020

Stress Testing 2.0: Better Informed Decisions Through Expanded Scenario-Based Risk Management

Turning a Regulatory Requirement Into Competitive Advantage

Mandated enterprise stress testing – the primary macro-prudential tool that emerged from the 2008 financial crisis – helps regulators address concerns about the state of the banking industry and its impact on the local and global financial system. These regulatory stress tests typically focus on the largest banking institutions and involve a limited set of prescribed downturn scenarios.

Regulatory stress testing requires a significant investment by financial institutions – in technology, skilled people and time. And the stress testing process continues to become even more complex as programs mature and regulatory expectations keep growing.

The question is, what’s the best way to go about stress testing, and what other benefits can banks realize from this investment? Equally important, should you view stress testing primarily as a regulatory compliance tool? Or can banks harness it as a management tool that links corporate planning and risk appetite – and democratizes scenario-based analysis across the institution for faster, better business decisions?

These are important questions for every bank executive and risk officer to answer because justifying large financial investments in people and technology solely to comply with periodic regulatory requirements can be difficult. Not that noncompliance is ever an option; failure can result in severe damage to reputation and investor confidence.

But savvy financial institutions are looking for – and realizing – a significant return on investment by reaching beyond simple compliance. They are seeing more effective, consistent analytical processes and the ability to address complex questions from senior management (e.g., the sensitivity of financial performance to changes in macroeconomic factors). Their successes provide a road map for those who are starting to build – or are rethinking their approach to – their stress testing infrastructure.

This article reviews the maturation of regulatory stress test regimes and explores diverse use cases where stress testing (or, more broadly, scenario-based analysis) may provide value beyond regulatory stress testing.

Comprehensive Capital Assessments: A Daunting Exercise

The regulatory stress test framework that emerged following the 2008 financial crisis – that banks perform capital adequacy-oriented stress testing over a multiperiod forecast horizon – is summarized in Figure 1. At each period, a scenario exerts its impact on the net profit or loss based on the as-of-date business, including:

  • portfolio balances,
  • exposures,
  • and operational income and costs.

The net profit or loss, after being adjusted by other financial obligations and management actions, will determine the capital that is available for the next period on the scenario path.

[Figure 1: Capital adequacy stress testing over a multiperiod forecast horizon]

Note that the natural evolution of the portfolio and business under a given scenario leads to a state of the business at the next horizon, which then starts a new evaluation of the available capital. The risk profile of this business evaluation also determines the capital requirement under the same scenario. The capital adequacy assessment can be performed through this dynamic analysis of capital supply and demand.
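A deliberately simplified sketch of this dynamic is shown below: capital is rolled forward one horizon at a time along a scenario path, and a ratio against a risk-weighted-asset proxy is tracked at each step. The starting position, scenario losses and ratio formula are illustrative only, not a regulatory calculation.

```python
# Simplified multi-period capital projection along a single scenario path.

def project_capital(initial_capital, rwa, scenario_pnl, dividends=0.0):
    """Roll capital forward one horizon at a time along a scenario path."""
    capital = initial_capital
    path = []
    for horizon, pnl in enumerate(scenario_pnl, start=1):
        capital = capital + pnl - dividends   # net profit/loss after other obligations
        ratio = capital / rwa                 # available capital vs. a requirement proxy
        path.append((horizon, capital, ratio))
    return path

# A three-horizon adverse scenario: losses in years one and two, partial recovery after.
for horizon, capital, ratio in project_capital(100.0, 800.0, [-15.0, -10.0, 5.0]):
    print(f"Horizon {horizon}: capital={capital:.1f}, ratio={ratio:.2%}")
```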

This comprehensive capital assessment requires cooperation from various groups across business and finance in an institution. But it becomes a daunting exercise on a multiperiod scenario because of the forward-looking and path-dependent nature of the analysis. For this reason, some jurisdictions began the exercise with only one horizon. Over time, these requirements have been revised to cover at least two horizons, which allows banks to build more realistic business dynamics into their analysis.

Maturing and Optimizing Regulatory Stress Testing

Stress testing – now a standard supervisory tool – has greatly improved banking sector resilience. In regions where stress testing capabilities are more mature, banks have built up adequate capital and have performed well in recent years. For example, both the Board of Governors of the US Federal Reserve System and the Bank of England announced good results for their recent stress tests of large banks.

As these programs mature, many jurisdictions are raising their requirements, both quantitatively and qualitatively. For example:

  • US CCAR and Bank of England stress tests now require banks to carry out tests on institution-specific scenarios, in addition to prescribed regulatory scenarios.
  • Banks in regions adopting IFRS 9, including the EU, Canada and the UK, are now required to incorporate IFRS 9 estimates into regulatory stress tests. Likewise, banks subject to stress testing in the US will need to incorporate CECL estimates into their capital adequacy tests.
  • Liquidity risk has been incorporated into stress tests – especially as part of resolution and recovery planning – in regions like the US and UK.
  • Jurisdictions in Asia (such as Taiwan) have extended the forecast horizons for their regulatory stress tests.

In addition, stress testing and scenario analysis are now part of Pillar 2 in the Internal Capital Adequacy Assessment Process (ICAAP) published by the Basel Committee on Banking Supervision. Institutions are expected to use stress tests and scenario analyses to improve their understanding of the vulnerabilities that they face under a wide range of adverse conditions. Further uses of regulatory stress testing include scenario-based analysis for Interest Rate Risk in the Banking Book (IRRBB).

Finally, the goal of regulatory stress testing is increasingly extending beyond completing a simple assessment. Management must prepare a viable mitigation plan should an adverse condition occur. Some regions also require companies to develop “living wills” to ensure the orderly wind-down of institutions and to prevent systemic contagion from an institutional failure.

All of these demands will require the adoption of new technologies and best practices.

Exploring Enhanced Use Cases for Stress Testing Capabilities

As noted by the Basel Committee on Banking Supervision in its 2018 publication Stress Testing Principles, “Stress testing is now a critical element of risk management for banks and a core tool for banking supervisors and macroprudential authorities.” As stress testing capabilities have matured, people are exploring how to use these capabilities for strategic business purposes – for example, to perform “internal stress testing.”

The term “internal stress testing” can seem ambiguous. Some stakeholders don’t understand the various use cases for applying scenario-based analyses beyond regulatory stress testing or doubt the strategic value to internal management and planning. Others think that developing a scenario-based analytics infrastructure that is useful across the enterprise is just too difficult or costly.

But there are, in fact, many high-impact strategic use cases for stress testing across the enterprise, including:

  1. Financial planning.
  2. Risk appetite management.
  3. What-if and sensitivity analysis.
  4. Emerging risk identification.
  5. Reverse stress testing.

Financial Planning

Stress testing is one form of scenario-based analysis. But scenario-based analysis is also useful for forward-looking financial planning exercises on several fronts:

  • The development of business plans and management actions are already required as part of regulatory stress testing, so it’s natural to align these processes with internal planning and strategic management.
  • Scenario-based analyses lay the foundation for assessing and communicating the impacts of changing environmental factors and portfolio shifts on the institution’s financial performance.
  • At a more advanced level, banks can incorporate scenario-based planning with optimization techniques to find an optimal portfolio strategy that performs robustly across a range of scenarios.

Here, banks can leverage the technologies and processes used for regulatory stress testing. However, both the infrastructure and program processes must be developed with flexibility in mind – so that both business-as-usual scenarios and alternatives can be easily managed, and the models and assumptions can be adjusted.

Risk Appetite Management

A closely related topic to stress testing and capital planning is risk appetite. Risk appetite defines the level of risk an institution is willing to take to achieve its financial objectives. According to Senior Supervisors Group (2008), a clearly articulated risk appetite helps financial institutions properly understand, monitor, and communicate risks internally and externally.

Figure 2 illustrates the dynamic relationship between stress testing, risk appetite and capital planning. Note that:

  • Risk appetite is defined by the institution to reflect its capital strategy, return targets and its tolerance for risk.
  • Capital planning is conducted in alignment with the stated risk appetite and risk policy.
  • Scenario-based analyses are then carried out to ensure the bank can operate within the risk appetite under a range of scenarios (i.e., planning, baseline and stressed).

[Figure 2: The dynamic relationship between stress testing, risk appetite and capital planning]

Any breach of the stated risk appetite observed in these analyses leads to management action; a minimal breach check is sketched after the list below. These actions may include, but are not limited to,

  • enforcement or reallocation of risk limits,
  • revisions to capital planning
  • or adjustments to current risk appetite levels.
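A minimal version of that breach check might look like the sketch below, where scenario results are compared against a stated appetite floor. The floor, scenario ratios and suggested actions are illustrative assumptions.

```python
# Hedged sketch: checking scenario results against a stated risk appetite.
RISK_APPETITE_FLOOR = 0.105   # minimum acceptable total capital ratio (illustrative)

scenario_ratios = {"planning": 0.132, "baseline": 0.124, "stressed": 0.097}

for scenario, ratio in scenario_ratios.items():
    if ratio < RISK_APPETITE_FLOOR:
        print(f"{scenario}: BREACH ({ratio:.1%} < {RISK_APPETITE_FLOOR:.1%}) -> "
              "escalate: revisit risk limits, capital plan or appetite level")
    else:
        print(f"{scenario}: within appetite ({ratio:.1%})")
```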

What-If and Sensitivity Analysis

Faster, richer what-if analysis is perhaps the most powerful – and demanding – way to extend a bank’s stress testing utility. What-if analyses are often initiated from ad hoc requests made by management seeking timely insight to guide decisions. Narratives for these scenarios may be driven by recent news topics or unfolding economic events.

An anecdotal example illustrates the business value of this type of analysis. Two years ago, a chief risk officer at one of the largest banks in the United States was at a dinner event and heard concerns about Chinese real estate and a potential market crash. He quickly asked his stress testing team to assess the impact on the bank if such an event occurred. His team was able to report back within a week. Fortunately, the result was not bad – news that was a relief to the CRO.

The responsiveness exhibited by this CRO’s stress testing team is impressive. But speed alone is not enough. To really get value from what-if analysis, banks must also conduct it with a reasonable level of detail and sophistication. For this reason, banks must design their stress test infrastructure to balance comprehensiveness and performance. Otherwise, its value will be limited.

Sensitivity analysis usually supplements stress testing. It differs from other scenario-based analyses in that the scenarios typically lack a narrative around them. Instead, they are usually defined parametrically to answer questions about scenario, assumption and model deviations.

Sensitivity analysis can answer questions such as:

  • Which economic factors are the most significant for future portfolio performance?
  • What level of uncertainty results from incremental changes to inputs and assumptions?
  • What portfolio concentrations are most sensitive to model inputs?

For modeling purposes, sensitivity tests can be viewed as an expanded set of scenario analyses. Thus, if banks perform sensitivity tests, they must be able to scale their infrastructure to complete a large number of tests within a reasonable time frame and must be able to easily compare the results.
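The sketch below illustrates what such scaling can look like: a grid of parametric scenarios is evaluated in parallel and the results are ranked by impact. The two macro factors and the toy loss function are stand-ins for a bank's own models.

```python
# Sketch of a parametric sensitivity sweep run in parallel (toy loss model).
from concurrent.futures import ProcessPoolExecutor
from itertools import product

def portfolio_loss(unemployment_shock: float, rate_shock: float) -> float:
    """Toy loss response to two macro factors; replace with the bank's own models."""
    return 50.0 * unemployment_shock + 20.0 * rate_shock

unemployment_grid = [0.00, 0.01, 0.02, 0.03]
rate_grid = [-0.01, 0.00, 0.01]

if __name__ == "__main__":
    scenarios = list(product(unemployment_grid, rate_grid))
    with ProcessPoolExecutor() as pool:
        losses = list(pool.map(portfolio_loss, *zip(*scenarios)))
    # Comparable results across the whole grid, ready for ranking factor sensitivity.
    for (u, r), loss in sorted(zip(scenarios, losses), key=lambda x: -x[1])[:3]:
        print(f"unemployment +{u:.0%}, rates {r:+.0%}: loss {loss:.1f}")
```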

Emerging Risk Identification

Econometric-based stress testing of portfolio-level credit, market, interest rate and liquidity risks is now a relatively established practice. But measuring the impacts from other risks, such as reputation and strategic risk, is not trivial. Scenario-based analysis provides a viable solution, though it requires proper translation from the scenarios involving these risks into a scenario that can be modeled. This process often opens a rich dialogue across the institution, leading to a beneficial consideration of potential business impacts.

Reverse Stress Testing

To enhance the relevance of the scenarios applied in stress testing analyses, many regulators have required banks to conduct reverse stress tests. For reverse stress tests, institutions must determine the risk factors that have a high impact on their business and identify scenarios that result in breaches of thresholds for specific output metrics (e.g., the total capital ratio).

There are multiple approaches to reverse stress testing. Skoglund and Chen proposed a method leveraging risk information measures to decompose the risk factor impact from simulations and apply the results for stress testing. Chen and Skoglund also explained how stress testing and simulation can leverage each other for risk analyses.
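As a much simpler illustration than the approaches above, a reverse stress test over a single severity parameter can be framed as a search for the point at which an output metric breaches its threshold. The capital-ratio response below is a toy stand-in, not a real model.

```python
# Illustrative reverse stress test: find the smallest stress severity that pushes
# the total capital ratio below a threshold (toy model, bisection search).

def capital_ratio(severity: float) -> float:
    """Toy response of the capital ratio to a one-dimensional stress severity."""
    return 0.13 - 0.06 * severity

def find_breach_severity(threshold: float = 0.105,
                         lo: float = 0.0, hi: float = 1.0, tol: float = 1e-4) -> float:
    """Bisection search for the severity at which the ratio crosses the threshold."""
    while hi - lo > tol:
        mid = (lo + hi) / 2
        if capital_ratio(mid) < threshold:
            hi = mid      # the breach occurs at or below this severity
        else:
            lo = mid
    return hi

print(f"Capital ratio breaches 10.5% at severity ~{find_breach_severity():.3f}")
```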

Assessing the Impacts of COVID-19

The worldwide spread of COVID-19 in 2020 has presented a sudden shock to the financial plans of lending institutions. Both the spread of the virus and the global response to it are highly dynamic. Bank leaders, seeking a timely understanding of the potential financial impacts, have increasingly turned to scenario analysis. But, to be meaningful, the process must:

  • Scale to an increasing array of input scenarios as the situation continues to develop.
  • Provide a controlled process to perform and summarize numerous iterations of analysis.
  • Provide understandable and explainable results in a timely fashion.
  • Provide process transparency and control for qualitative and quantitative assumptions.
  • Maintain detailed data to support ad hoc reporting and concentration analysis.

Banks able to conduct rapid ad hoc analysis can respond more confidently and provide a data-driven basis for the actions they take as the crisis unfolds.

Conclusion

Regulatory stress testing has become a primary tool for bank supervision, and financial institutions have dedicated significant time and resources to comply with their regional mandates. However, the benefits of scenario-based analysis reach beyond such rote compliance.

Leading banks are finding they can expand the utility of their stress test programs to

  • enhance their understanding of portfolio dynamics,
  • improve their planning processes
  • and better prepare for future crises.

Through increased automation, institutions can

  • explore a greater range of scenarios,
  • reduce processing time and effort,
  • and support the increased flexibility required for strategic scenario-based analysis.

Armed with these capabilities, institutions can improve their financial performance and successfully weather downturns by making better, data-driven decisions.

Click here to access SAS’ latest Whitepaper

Implementing combined audit assurance

ASSESS IMPACT & CREATE AN ASSURANCE MAP

The audit impact assessment and assurance map are interdependent—and the best possible starting point for your combined assurance journey. An impact assessment begins with a critical look at the current or “as is” state of your organization. As you review your current state, you build out your assurance map with your findings. You can’t really do one without the other. The map, then, will reveal any overlaps and gaps, and provide insight into the resources, time, and costs you might require during your implementation. Looking at an assurance map example will give you a better idea of what we’re talking about. The Institute of Chartered Accountants in England and Wales (ICAEW) has an excellent template.

[Figure: ICAEW assurance map template]

The ICAEW has also provided a guide to building a sound assurance map. The institute suggests you take the following steps:

  1. Identify your sponsor (the main user/senior staff member who will act as a champion).
  2. Determine your scope (identify elements that need assurance, like operational/ business processes, board-level risks, governance, and compliance).
  3. Assess the required amount of assurance for each element (understand what the required or desired amount of assurance is across aspects of the organization).
  4. Identify and list your assurance providers in each line of defense (e.g., audit committee or risk committee in the third line).
  5. Identify your assurance activities (compile and review relevant documentation, select and interview area leads, collate and assess assurance provider information).
  6. Reassess your scope (revisit and update your map scope, based on the information you have gathered/evaluated to date).
  7. Assess the quality of your assurance activities (look at breadth and depth of scope, assurance provider competence, how often activities are reviewed, and the strengths/quality of assurance delivered by each line of defense).
  8. Assess the aggregate actual amount of assurance for each element (the total amount of assurance needs to be assessed, collating all the assurance being provided by each line of defense).
  9. Identify the gaps and overlaps in assurance for each element (compare the actual amount of assurance with the desired amount to determine if there are gaps or overlaps).
  10. Determine your course of action (make recommendations for the actions to be taken/activities to be performed moving forward).

Based on the steps above, you can see how your desired state takes shape by the time you reach step 10. Ideally, by this point, gaps and overlaps have been eliminated. But the steps we just reviewed don’t cover the frequency of each review, and they don’t determine costs. So we’ve added a few more steps to round things out (a small sketch of the gap/overlap and cost comparison follows the list below):

  1. Assess the frequency of each assurance activity.
  2. Identify total cost for all the assurance activities in the current state.
  3. Identify the total cost for combined assurance (i.e., when gaps and overlaps have been addressed, and any consequent benefits or cost savings).
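
To make steps 8 to 10 and the added cost steps concrete, here is a minimal, purely illustrative Python sketch. The element names, scores, and costs are invented assumptions, not ICAEW guidance: it totals the assurance provided for each element across the lines of defense, flags gaps and overlaps against the desired amount, and sums the current cost of the assurance activities.

  from dataclasses import dataclass, field

  @dataclass
  class AssuranceActivity:
      provider: str          # e.g. "Internal audit" (third line)
      line_of_defense: int   # 1, 2 or 3
      assurance_score: int   # contribution toward the element's assurance (0-10)
      annual_cost: float     # cost of performing this activity

  @dataclass
  class AssuranceElement:
      name: str
      required_score: int    # desired amount of assurance for this element
      activities: list = field(default_factory=list)

      def actual_score(self) -> int:
          return sum(a.assurance_score for a in self.activities)

      def status(self) -> str:
          gap = self.actual_score() - self.required_score
          if gap < 0:
              return f"GAP ({-gap} short)"
          if gap > 0:
              return f"OVERLAP ({gap} excess)"
          return "ADEQUATE"

  # Hypothetical elements and activities, for illustration only
  elements = [
      AssuranceElement("Claims processing", required_score=8, activities=[
          AssuranceActivity("Management self-assessment", 1, 3, 20_000),
          AssuranceActivity("Risk & compliance review", 2, 3, 35_000),
          AssuranceActivity("Internal audit", 3, 4, 50_000),
      ]),
      AssuranceElement("Data privacy compliance", required_score=9, activities=[
          AssuranceActivity("Management self-assessment", 1, 2, 15_000),
      ]),
  ]

  for element in elements:
      print(f"{element.name}: required {element.required_score}, "
            f"actual {element.actual_score()} -> {element.status()}")

  current_cost = sum(a.annual_cost for el in elements for a in el.activities)
  print(f"Total current cost of assurance activities: {current_cost:,.0f}")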

DEFINE THE RISKS OF IMPLEMENTATION

Implementing combined assurance is a project, and like any project, there’s a chance it can go sideways and fail, losing you both time and money. So, just like anything else in business, you need to take a risk-based approach. As part of this stage, you’ll want to clearly define the risks of implementing a combined assurance program, and add these risks, along with a mitigation plan and the expected benefits, to your tool kit. As long as the projected benefits of the project outweigh the residual risks and costs, the implementation program is worth pursuing. You’ll need to be able to demonstrate that a little further down the process.

DEFINE RESOURCES & DELIVERABLES

Whoever will own the project of implementing combined assurance will no doubt need dedicated resources in order to execute. So, who do we bring in? On first thought, the internal audit team looks best suited to drive the program forward. But, during the implementation phase, you’ll actually want a cross-functional team of people from internal control, risk, and IT, to work alongside internal audit. So, when you’re considering resourcing, think about each and every team this project touches. Now you know who’s going to do the work, you’ll want to define what they’re doing (key milestones) and when it will be delivered (time frame). And finally, define the actual benefits, as well as the tangible deliverables/outcomes of implementing combined assurance. (The table below provides some examples, but each organization will be unique.)

[Table: example benefits and deliverables of implementing combined assurance]

RAISE AWARENESS & GET MANAGEMENT COMMITMENT

Congratulations! You’re now armed with a fancy color-coded impact assessment, and a full list of risks, resources, and deliverables. The next step is to clearly communicate and share the driving factors behind your combined assurance initiative. If you want them to support and champion your efforts, top management will need to be able to quickly take in and understand the rationale behind your desire for combined assurance. Critical output: You’ll want to create a presentation kit of sorts, including the assurance map, lists of risks, resources, and deliverables, a cost/benefit analysis, and any supporting research or frameworks (e.g., the King IV Report, FRC Corporate Governance Code, available industry analysis, and case studies). Chances are, you’ll be presenting this concept more than once, so if you can gather and organize everything in a single spot, that will save a lot of headaches down the track.

ASSIGN ACCOUNTABILITY

When we ask the question, “Who owns the implementation of combined assurance?”, we need to consider two main things:

  • Who would be most impacted if combined assurance were implemented?
  • Who would be senior enough to work across teams to actually get the job done?

It’s evident that a board/C-level executive should lead the project. This project will span multiple departments and require buy-in from many people—so you need someone who can influence and convince. Therefore, we feel that the chief audit executive (CAE) and/or the chief risk officer (CRO) should be accountable for implementing combined assurance. The CAE literally stands at the intersection of internal and external assurance. Where reliance is placed on the work of others, the CAE is still accountable and responsible for ensuring adequate support for conclusions and opinions reached by the internal audit activity. And the CRO is taking a more active interest in assurance maps as they become increasingly risk-focused. The Institute of Internal Auditors (IIA), Standard 2050, also assigns accountability to the CAE, stating: “The chief audit executive should share information and coordinate activities with other internal and external assurance providers and consulting services to ensure proper coverage and minimize duplication of effort.” So, not only is the CAE at the intersection of assurance, they’re also directing traffic—exactly the combination we need to drive implementation.

Envisioning the solution

You’ve summarized the current/“as is” state in your assurance map. Now it’s time to move into a future state of mind and envision your desired state. What does your combined assurance solution look like? And, more critically, how will you create it? This stage involves more assessment work. Only now you’ll be digging into the maturity levels of your organization’s risk management and internal audit process, as well as the capabilities and maturity of your Three Lines of Defense. This is where you answer the questions, “What do I want?”, and “Is it even feasible?” Some make-or-break capability factors for implementing combined assurance include:

  1. Corporate risk culture. Risk culture and risk appetite shape an organization’s decision-making, and that culture is reflected at every level. Organizations that are more risk-averse tend to be unwilling to make quick decisions without evidence and data. On the other hand, risk-tolerant organizations take more risks, make rapid decisions, and pivot quickly, often without performing due diligence. How will your risk culture shape your combined assurance program?
  2. Risk management awareness. If employees don’t know—and don’t prioritize—how risk can and should be managed in your organization, your implementation program will fail. Assurance is very closely tied to risk, so it’s important to communicate constantly and make people aware that risk at every level must be adequately managed.
  3. Risk management processes. We just stated that risk and assurance are tightly coupled, so it makes sense that the more mature your risk management processes are, the easier it will be to implement combined assurance. Mature risk management means you’ve got processes defined, documented, running, and refined. The lucky few who have all of these things will have a much easier time than those who don’t.
  4. Risk & controls taxonomy. Without question, you will require a common risk and compliance language. We can’t have people making up names for tools, referring to processes in different ways, or worst of all, reporting on totally random KPIs. The result of combined assurance should be “one language, one voice, one view” of the risks and issues across the organization.
  5. System & process integrations. An integrated system with one set of risks and one set of controls is key to delivering effective combined assurance. This includes risk registers across the organization, controls across the organization, issues and audit findings, and reporting.
  6. Technology use. Without dedicated software technology, it’s extremely difficult to provide a sustainable risk management system with sound processes, a single taxonomy, and integrated risks and controls. How technology is used in your organization will determine the sustainability of combined assurance. (If you already have a risk management and controls platform with these integration capabilities, implementation will be easier.)
  7. Using assurance maps as monitoring tools. Assurance maps aren’t just for envisioning end states; they’re also critical monitoring tools that can feed data into your combined assurance dashboard and help report on progress.
  8. Continuous improvement mechanisms. A mature program will always have improvement mechanisms and feedback loops to incorporate user and stakeholder feedback. Without them, the continued effectiveness of combined assurance will suffer.

We now assess the maturity of these factors (plus any others that you find relevant) and rank them on a scale of 1-4:

  • Level 1: Not achieved (0-15% of target).
  • Level 2: Partially achieved (15-50%).
  • Level 3: Largely achieved (50-85%).
  • Level 4: Achieved (85-100%).

This rating scale is based on ISO/IEC 15504, which assigns a rating to the degree to which each objective (process capability) is achieved. An example of a combined assurance capability maturity assessment can be seen in Figure 2.
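
As a minimal illustration (not part of the ISO/IEC 15504 standard itself), the thresholds above can be expressed as a simple lookup; the factor names and scores below are hypothetical.

  def maturity_level(percent_achieved: float) -> int:
      """Map the percentage of target achieved to a 1-4 maturity level."""
      if percent_achieved <= 15:
          return 1   # Not achieved
      if percent_achieved <= 50:
          return 2   # Partially achieved
      if percent_achieved <= 85:
          return 3   # Largely achieved
      return 4       # Achieved

  # Hypothetical assessment scores for two capability factors
  factors = {"Corporate risk culture": 40, "Technology use": 70}
  for name, pct in factors.items():
      print(f"{name}: level {maturity_level(pct)}")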

[Figure 2: example combined assurance capability maturity assessment]

GAP ANALYSIS

Once the desired levels for all of the factors are agreed on and endorsed by senior management, the next step is to undertake a gap analysis. The example in Figure 2 shows that the current overall maturity level is a 2 and the desired level is a 3 or 4 for each factor. The gap for each factor needs to be analyzed for the activities and resources required to bridge it. Then you can envision the solution and create a roadmap to bridge the gap(s).

SOLUTION VISION & ROADMAP

An example solution vision and roadmap could be:

  • We will use the same terminology and language for risk in all parts of the organization, and establish a single risk dictionary as a central repository.
  • All risks will be categorized according to severity and criticality and be mapped to assurance providers to ensure that no risk is assessed by more than one provider.
  • A rolling assurance plan will be prepared to ensure that risks are appropriately prioritized and reviewed at least once every two years.
  • An integrated, real-time report will be available on demand to show the status, frequency, and coverage of assurance activities.
  • The integrated report/assurance map will be shared with the board, audit committee, and risk committee regularly (e.g., quarterly or half-yearly).
  • To enable these capabilities, risk capture, storage, and reporting will be automated using an integrated software platform.

Figure 3 shows an example roadmap to achieve your desired maturity level.

[Figure 3: example roadmap to the desired maturity level]

Click here to access Galvanize’s Risk Management White Paper

 

Fintech, regtech and the role of compliance in 2020

The ebb and flow of attitudes on the adoption and use of technology has evolving ramifications for financial services firms and their compliance functions, according to the findings of the Thomson Reuters Regulatory Intelligence’s fourth annual survey on fintech, regtech and the role of compliance. This year’s survey results represent the views and experiences of almost 400 compliance and risk practitioners worldwide.

During the lifetime of the report it has had nearly 2,000 responses and been downloaded nearly 10,000 times by firms, risk and compliance practitioners, regulators, consultancies, law firms and global systemically important financial institutions (G-SIFIs). The report also highlights the shifting role of the regulator and concerns about best or better practice approaches to tackle the rise of cyber risk. The findings have become a trusted source of insight for firms, regulators and their advisers alike. They are intended to help regulated firms with planning, resourcing and direction, and to allow them to benchmark whether their resources, skills, strategy and expectations are in line with those of the wider industry. As with previous reports, regional and G-SIFI results are split out where they highlight any particular trend.

One challenge for firms is the need to acquire the skill sets essential to reaping the expected benefits of technological solutions. Equally, regulators and policymakers need the appropriate up-to-date skill sets to enable consistent oversight of the use of technology in financial services. Firms themselves, and G-SIFIs in particular, have made substantial investments in skills and the upgrading of legacy systems.

Key findings

  • The involvement of risk and compliance functions in their firm’s approach to fintech, regtech and insurtech continues to evolve. Some 65% of firms reported their risk and compliance function was either fully engaged and consulted or had some involvement (59% in prior year). In the G-SIFI population 69% reported at least some involvement with those reporting their compliance function as being fully engaged and consulted almost doubling from 13% in 2018, to 25% in 2019. There is an even more positive picture presented on increasing board involvement in the firm’s approach to fintech, regtech and insurtech. A total of 62% of firms reported their board being fully engaged and consulted or having some involvement, up from 54% in the prior year. For G-SIFIs 85% reported their board being fully engaged and consulted or having some involvement, up from 56% in the prior year. In particular, 37% of G-SIFIs reported their board was fully engaged with and consulted on the firm’s approach to fintech, regtech and insurtech, up from 13% in the prior year.
  • Opinion on technological innovation and digital disruption has fluctuated in the past couple of years. Overall, the level of positivity about fintech innovation and digital disruption has increased, after a slight dip in 2018. In 2019, 83% of firms have a positive view of fintech innovation (23% extremely positive, 60% mostly positive), compared with 74% in 2018 and 83% in 2017. In the G-SIFI population the positivity rises to 92%. There are regional variations, with the UK and Europe reporting a 97% positive view at one end going down to a 75% positive view in the United States.
  • There has been a similar ebb and flow of opinion about regtech innovation and digital disruption although at lower levels. A total of 77% reported either an extremely or mostly positive view, up from 71% in the prior year. For G-SIFIs 81% had a positive view, up from 76% in the prior year.
  • G-SIFIs have reported a significant investment in specialist skills for both risk and compliance functions and at board level. Some 21% of G-SIFIs reported they had invested in and/or appointed people with specialist skills to the board to accommodate developments in fintech, insurtech and regtech, up from 2% in the prior year. This means in turn 79% of G-SIFIs have not completed their work in this area, which is potentially disturbing. Similarly, 25% of G-SIFIs have invested in specialist skills for the risk and compliance functions, up from 9% in the prior year. In the wider population 10% reported investing in specialist skills at board level and 16% reported investing in specialist skills for the risk and compliance function. A quarter (26%) reported they have yet to invest in specialist skills for the risk and compliance function, but they know it is needed (32% for board-level specialist skills). Again, these figures suggest 75% of G-SIFIs have not fully upgraded their risk and compliance functions, rising to 84% in the wider population.
  • The greatest financial technology challenges firms expect to face in the next 12 months have changed in nature since the previous survey, with the top three challenges cited as keeping up with technological advancements; budgetary limitations, lack of investment and cost; and data security. In prior years, the biggest challenges related to the need to upgrade legacy systems and processes, budgetary limitations, the adequacy and availability of skilled resources, and the need for cyber resilience. In terms of the greatest benefits expected from financial technology in the next 12 months, the top three are a strengthening of operational efficiency, improved services for customers and greater business opportunities.
  • G-SIFIs are leading the way on the implementation of regtech solutions. Some 14% of G-SIFIs have implemented a regtech solution, up from 9% in the prior year with 75% (52% in the prior year) reporting they have either fully or partially implemented a regtech solution to help manage compliance. In the wider population, 17% reported implementing a regtech solution, up from 8% in the prior year. The 2018 numbers overall showed a profound dip from 2017 when 29% of G-SIFIs and 30% of firms reported implementing a regtech solution, perhaps highlighting that early adoption of regtech solutions was less than smooth.
  • Where firms have not yet deployed fintech or regtech solutions, various reasons were cited as to what was holding them back. Significantly, one third of firms cited lack of investment; a similar number pointed to a lack of in-house skills and information security/data protection concerns. Some 14% of firms and 12% of G-SIFIs reported they had taken a deliberate strategic decision not to deploy fintech or regtech solutions yet.
  • There continues to be substantial variation in the overall budget available for regtech solutions. A total of 38% of firms (31% in prior year) reported that the expected budget would grow in the coming year, however, 31% said they lack a budget for regtech (25% in the prior year). For G-SIFIs 48% expected the budget to grow (36% in prior year), with 12% reporting no budget for regtech solutions (6% in the prior year).

Focus: Challenges for firms

Technological challenges for firms come in all shapes and sizes. There is the potentially marketplace-changing challenge posed by the rise of bigtech. There is also the evolving approach of regulators and the need to invest in specialist skill sets. Lastly, there is the emerging need to keep up with technological advances themselves.

[Figure: greatest financial technology challenges expected by firms]

The challenges for firms have moved on. In the first three years of the report the biggest financial technology challenge facing firms was that of the need to upgrade legacy systems and processes. This year the top three challenges are expected to be the need to keep up with technology advancements; perceived budgetary limitations, lack of investment and cost, and then data security.

Focus: Cyber risk

Cyber risk and the need to be cyber-resilient is a major challenge for financial services firms which are targets for hackers. They must be prepared and be able to respond to any kind of cyber incident. Good customer outcomes will be under threat if cyber resilience fails.

One of the most prevalent forms of cyber attack is ransomware. There are different types of ransomware, all of which will seek to prevent a firm or an individual from using their IT systems and will ask for something (usually payment of a ransom) to be done before access will be restored. Even then, there is no guarantee that paying the ransom or acceding to the ransomware attacker’s demands will restore full access to all IT systems, data or files. Many firms have found that critical files, often containing client data, have been encrypted as part of an attack and large amounts of money are demanded for restoration. Encryption is in this instance used as a weapon, and it can be practically impossible to reverse-engineer the encryption or “crack” the files without the original encryption key – which cyber attackers deliberately withhold.

What was often previously viewed as an IT problem has become a significant issue for risk and compliance functions. The regulatory stance is typified by the UK Financial Conduct Authority (FCA), which has said its goal is to “help firms become more resilient to cyber attacks, while ensuring that consumers are protected and market integrity is upheld”. Regulators do not expect firms to be impervious but do expect cyber risk management to become a core competency.

Good and better practice on defending against ransomware attacks

Risk and compliance officers do not need to become technological experts overnight but must ensure cyber risks are effectively managed and reported on within their firm’s corporate governance framework. For some compliance officers, cyber risk may be well outside their comfort zone, but there is evidence that simple steps implemented rigorously can go a long way towards protecting a firm and its customers. Any basic cyber-security hygiene aimed at protecting businesses from ransomware attacks should make full use of the wide range of resources available on cyber resilience, IT security and protecting against malware attacks. The UK National Cyber Security Centre (NCSC) has produced practical guidance on how organizations can protect themselves in cyberspace, which it updates regularly. Indeed, the NCSC’s 10 steps to cyber security have now been adopted by most of the FTSE 350.

[Figure: good and better practice steps for defending against ransomware attacks]

Closing thoughts

The financial services industry has much to gain from the effective implementation of fintech, regtech and insurtech, but the practical reality is that there are numerous challenges to overcome before the potential benefits can be realised. Investment continues to be needed in skill sets, systems upgrades and cyber resilience before firms can deliver technological innovation without endangering good customer outcomes.

An added complication is the business need to innovate while looking over one shoulder at the threat posed by bigtech. There are also concerns for solution providers. The last year has seen many technology start-ups going bust and far fewer new start-ups getting off the ground – an apparent parallel, at least on the surface, to the dotcom bubble. Solutions need to be practical, providers need to be careful not to overpromise and underdeliver, and above all, developments should be aimed at genuine problems and not be solutions looking for a problem. There are nevertheless potentially substantive benefits to be gained from implementing fintech, regtech and insurtech solutions. For risk and compliance functions, much of the benefit may come from the ability to automate rote processes with increasing accuracy and speed. Indeed, when 900 respondents to the 10th annual cost of compliance survey report were asked to look into their crystal balls and predict the biggest change for compliance in the next 10 years, the largest response was automation.

Technology and its failure or misuse is increasingly being linked to the personal liability and accountability of senior managers. Chief executives, board members and other senior individuals will be held accountable for failures in technology and should therefore ensure their skill set is up-to-date. Regulators and politicians alike have shown themselves to be increasingly intolerant of senior managers who fail to take the expected reasonable steps with regards to any lack of resilience in their firm’s technology.

This year’s findings suggest firms may find it beneficial to consider:

  • Is fintech (and regtech) properly considered as part of the firm’s strategy? It is important for regtech especially not to be forgotten about in strategic terms: a systemic failure arising from a regtech solution has great capacity to cause problems for the firm – the UK FCA’s actions on regulatory reporting, among other things, are an indicator of this.
  • Not all firms seem to have fully tackled the governance challenge fintech implies: greater specialist skills may be needed at board level and in risk and compliance functions.
  • Lack of in-house skills was given as a main reason for failing to develop fintech or regtech solutions. It is heartening that firms understand the need for those skills. As fintech/regtech becomes mainstream, however, firms may be pressed into developing such solutions. Is there a plan in place to plug the skills gap?
  • Only 22% of firms reported that they need more resources to evaluate, understand and deploy fintech/ regtech solutions. This suggests 78% of firms are unduly relaxed about the resources needed in the second line of defence to ensure fintech/regtech solutions are properly monitored. This may be a correct conclusion, but seems potentially bullish.

Click here to access Thomson Reuters’ Survey Results

Benchmarking digital risk factors facing financial service firms

Risk management is the foundation upon which financial institutions are built. Recognizing risk in all its forms—measuring it, managing it, mitigating it—is critical to success. But has every firm achieved that goal? It doesn’t take in-depth research beyond the myriad of breach headlines to answer that question.

But many important questions remain: What are key dimensions of the financial sector Internet risk surface? How does that surface compare to other sectors? Which specific industries within Financial Services appear to be managing that risk better than others? We take up these questions and more in this report.

  1. The financial sector boasts the lowest rate of high and critical security exposures among all sectors. This indicates they’re doing a good job managing risk overall.
  2. But not all types of financial service firms appear to be managing risk equally well. For example, the rate of severe findings in the smallest commercial banks is 4x higher than that of the largest banks.
  3. It’s not just small community banks struggling, however. Securities and Commodities firms show a disconcerting combination of having the largest deployment of high-value assets AND the highest rate of critical security exposures.
  4. Others appear to be exceeding the norm. Take credit card issuers: they typically have the largest Internet footprint but balance that by maintaining the lowest rate of security exposures.
  5. Many other challenges and risk factors exist. For instance, the industry average rate of severe security findings in critical cloud-based assets is 3.5x that of assets hosted on-premises.

Dimensions of the Financial Sector Risk Surface

As Digital Transformation ushers in a plethora of changes, critical areas of risk exposure are also changing and expanding. We view the risk surface as anywhere an organization’s ability to operate, reputation, assets, legal obligations, or regulatory compliance is at risk. The aspects of a firm’s risk exposure that are associated with or observable from the internet are considered its internet risk surface. In Figure 1, we compare five key dimensions of the internet risk surface across different industries and highlight where the financial sector ranks among them.

  • Hosts: Number of internet-facing assets associated with an organization.
  • Providers: Number of external service providers used across hosts.
  • Geography: Measure of the geographic distribution of a firm’s hosts.
  • Asset Value: Rating of the data sensitivity and business criticality of hosts based on multiple observed indicators. High-value systems include those that collect GDPR- and CCPA-regulated information.
  • Findings: Security-relevant issues that expose hosts to various threats, following the CVSS rating scale.
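
As a rough, hypothetical sketch of how these five dimensions might be summarised for a single organization from internet-facing host records (the field names and the CVSS >= 7.0 severity threshold are our assumptions, not the report’s methodology):

  from statistics import mean

  # Invented host records for illustration only
  hosts = [
      {"provider": "aws", "country": "US", "asset_value": "high", "findings_cvss": [7.5, 4.3]},
      {"provider": "on-prem", "country": "US", "asset_value": "medium", "findings_cvss": []},
      {"provider": "azure", "country": "DE", "asset_value": "low", "findings_cvss": [9.1]},
  ]

  SEVERE = 7.0  # assumed CVSS threshold for "high or critical" findings

  summary = {
      "hosts": len(hosts),
      "providers": len({h["provider"] for h in hosts}),
      "geography": len({h["country"] for h in hosts}),
      "high_value_share": mean(h["asset_value"] == "high" for h in hosts),
      "severe_finding_rate": mean(
          any(score >= SEVERE for score in h["findings_cvss"]) for h in hosts
      ),
  }
  print(summary)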

[Figure 1: key dimensions of the internet risk surface across industries]

The values recorded in Figure 1 for these dimensions represent what’s “typical” (as measured by the mean or median) among organizations within each sector. There’s a huge amount of variation, meaning not all financial institutions operate more external hosts than all realtors, but what you see here is the general pattern. The blue highlights trace the ranking of Finance along each dimension.

Financial firms are undoubtedly aware of these tendencies and the need to protect those valuable assets. What’s more, that awareness appears to translate fairly effectively into action. Finance boasts the lowest rate of high and critical security exposures among all sectors. We also ran the numbers specific to high-value assets, and financial institutions show the lowest exposure rates there too. All of this aligns pretty well with expectations—financial firms keep a tight rein on their valuable Internet-exposed assets.

This control tendency becomes even more apparent when examining the distribution of hosts with severe findings in Figure 2. Blue dots mark the average exposure rate for the entire sector (and correspond to values in Figure 1), while the grey bars indicate the amount of variation among individual organizations within each sector. The fact that Finance exhibits the least variation shows that even rotten apples don’t fall as far from the Finance tree as they often do in other sectors. Perhaps a rising tide lifts all boats?

[Figure 2: distribution of hosts with severe findings by sector]

Security Exposures in Financial Cloud Deployments

We now know financial institutions do well minimizing security findings, but does that record stand equally strong across all infrastructure? Figure 3 answers that question by featuring four of the five key risk surface dimensions:

  • the proportion of hosts (square size),
  • asset value (columns),
  • hosting location (rows),
  • and the rate of severe security findings (color scale and value label).

This view facilitates a range of comparisons, including the relative proportion of assets hosted internally vs. in the cloud, how asset value distributes across hosting locales, and where high-severity issues accumulate.
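
A minimal sketch (with invented records) of the kind of grouping behind this view: compute the share of hosts with severe findings for each combination of hosting location and asset value.

  from collections import defaultdict

  # Hypothetical host records, for illustration only
  hosts = [
      {"location": "cloud",   "asset_value": "high",   "has_severe_finding": True},
      {"location": "cloud",   "asset_value": "low",    "has_severe_finding": False},
      {"location": "on-prem", "asset_value": "medium", "has_severe_finding": False},
      {"location": "on-prem", "asset_value": "high",   "has_severe_finding": True},
  ]

  groups = defaultdict(list)
  for h in hosts:
      groups[(h["location"], h["asset_value"])].append(h["has_severe_finding"])

  for (location, value), flags in sorted(groups.items()):
      rate = sum(flags) / len(flags)
      print(f"{location:8s} {value:6s} severe-finding rate: {rate:.0%}")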

[Figure 3: rate of severe findings by asset value and hosting location]

From Figure 3, box sizes indicate that organizations in the financial sector host a majority of their Internet-facing systems on-premises, but do leverage the cloud to a greater degree for low-value assets. The bright red box makes it apparent that security exposures concentrate more acutely in high-value assets hosted in the cloud. Overall, the rate of severe findings in cloud-based assets is 3.5x that of on-prem. This suggests the angst many financial firms have over moving to the cloud does indeed have some merit. But when we examine the Finance sector relative to others in Figure 4 the intensity of exposures in critical cloud assets appears much less drastic.

In Figure 3, we can see that the largest number of hosts are on-prem and of medium value. But high-value assets in the cloud exhibit the highest rate of findings.

Given that cloud vs. on-prem exposure disparity, we feel the need to caution against jumping to conclusions. We could interpret these results to proclaim that the cloud isn’t ready for financial applications and should be avoided. Another interpretation could suggest that it’s more about organizational readiness for the cloud than the inherent insecurity of the cloud. Either way, it appears that many financial institutions migrating to the cloud are handling that paradigm shift better than others.

It must also be noted that not all cloud environments are the same. Our Cloud Risk Surface report discovered an average 12X difference between cloud providers with the highest and lowest exposure rates. We still believe this says more about the typical users and use cases of the various cloud platforms than any intrinsic security inequalities. But at the same time, we recommend evaluating cloud providers based on internal features as well as tools and guidance they make available to assist customers in securing their environments. Certain clouds are undoubtedly a better match for financial services use cases, while others are less so.

[Figure 4: severe findings in critical cloud assets, Finance vs. other sectors]

Risk Surface of Subsectors within Financial Services

Having compared Finance to other sectors at a high level, we now examine the risk surface of major subsectors of financial services according to the following NAICS designations:

  • Insurance Carriers: Institutions engaged in underwriting and selling annuities, insurance policies, and benefits.
  • Credit Intermediation: Includes banks, savings institutions, credit card issuers, loan brokers, and processors, etc.
  • Securities & Commodities: Investment banks, brokerages, securities exchanges, portfolio management, etc.
  • Central Banks: Monetary authorities that issue currency, manage national money supply and reserves, etc.
  • Funds & Trusts: Funds and programs that pool securities or other assets on behalf of shareholders or beneficiaries.

[Figure 5: risk surface dimensions across financial services subsectors]

Figure 5 compares these Finance subsectors along the same dimensions used in Figure 1. At the top, we see that Insurance Carriers generally maintain a large Internet surface area (hosts, providers, countries), but a comparatively lower ranking for asset value and security findings. The Credit Intermediation subsector (the NAICS designation that includes banks, brokers, creditors, and processors) follows a similar pattern. This indicates that such organizations are, by and large, able to maintain some level of control over their expanding risk surface.

The combination of a leading share of high-value assets and a leading rate of critical security findings makes the Securities and Commodities subsector a disconcerting case. It suggests either unusually high risk tolerance or ineffective risk management (or both), leaving those valuable assets overexposed. The Funds and Trusts subsector exhibits a more risk-averse approach, minimizing exposures across its relatively small digital footprint of valuable assets.

Risk Surface across Banking Institutions

Given that the financial sector is so broad, we thought a closer examination of the risk surface particular to banking institutions was in order. Banks have long concerned themselves with risk. Well before the rise of the Internet or mobile technologies, banks made their profits by determining how to gauge the risk of potential borrowers or loans, plotting the risk and reward of offering various deposit and investment products, or entering different markets, allowing access through several delivery channels. It could be said that the successful management and measurement of risk throughout an organization is perhaps the key factor that has always determined the relative success or failure of any bank.

As a highly-regulated industry in most countries, banking institutions must also consider risk from more than a business or operational perspective. They must take into account the compliance requirements to limit risk in various areas, and ensure that they are properly securing their systems and services in a way that meets regulatory standards. Such pressures undoubtedly affect the risk surface and Figure 6 hints at those effects on different types of banking institutions.

Credit card issuers earn the honored distinction of having the largest average number of Internet-facing hosts (by far) while achieving the lowest prevalence of severe security findings. Credit unions flip this trend with the fewest hosts and most prevalent findings. This likely reflects the perennial struggle of credit unions to get the most bang from their buck.

Traditionally well-resourced commercial banks leverage the most third party providers and have a presence in more countries, all with a better-than-average exposure rate. Our previous research revealed that commercial banks were among the top two generators and receivers of multi-party cyber incidents, possibly due to the size and spread of their risk surface.

[Figure 6: risk surface dimensions across banking institutions]

Two Things to Consider

  1. In this interconnected world, third-party and fourth-party risk is your risk. If you are a financial institution, particularly a commercial bank, take a moment to congratulate yourself on managing risk well – but only for a moment. Why? Because every enterprise is critically dependent on a wide array of vendors and partners that span a broad spectrum of industries. Their risk is your risk. The work of your third-party risk team is critically important in holding your vendors accountable to managing your risk interests well.
  2. Managing risk—whether internal or third-party—requires focus. There are simply too many things to do, giving rise to the endless “hamster wheel of risk management.” A better approach starts with obtaining an accurate picture of your risk surface and the critical exposures across it. This includes third-party relationships, and now fourth-party risk, which bank regulators are requiring. Do you have the resources to manage this sufficiently? Do you know your risk surface?

Click here to access Riskrecon Cyentia’s Study

Uncertainty Visualization

Uncertainty is inherent to most data and can enter the analysis pipeline during the measurement, modeling, and forecasting phases. Effectively communicating uncertainty is necessary for establishing scientific transparency. Further, people commonly assume that there is uncertainty in data analysis, and they need to know the nature of the uncertainty to make informed decisions.

However, understanding even the most conventional communications of uncertainty is highly challenging for novices and experts alike, due in part to the abstract nature of probability and to ineffective communication techniques. Reasoning with uncertainty is universally difficult, but researchers are revealing how some types of visualizations can improve decision-making in a variety of contexts,

  • from hazard forecasting,
  • to healthcare communication,
  • to everyday decisions about transit.

Scholars have distinguished different types of uncertainty, including

  • aleatoric (irreducible randomness inherent in a process),
  • epistemic (uncertainty from a lack of knowledge that could theoretically be reduced given more information),
  • and ontological uncertainty (uncertainty about how accurately the modeling describes reality, which can only be described subjectively).

The term risk is also used in some decision-making fields to refer to quantified forms of aleatoric and epistemic uncertainty, whereas uncertainty is reserved for potential error or bias that remains unquantified. Here we use the term uncertainty to refer to quantified uncertainty that can be visualized, most commonly a probability distribution. This article begins with a brief overview of the common uncertainty visualization techniques and then elaborates on the cognitive theories that describe how the approaches influence judgments. The goal is to provide readers with the necessary theoretical infrastructure to critically evaluate the various visualization techniques in the context of their own audience and design constraints. Importantly, there is no one-size-fits-all uncertainty visualization approach guaranteed to improve decisions in all domains, nor even guarantees that presenting uncertainty to readers will necessarily improve judgments or trust. Therefore, visualization designers must think carefully about each of their design choices or risk adding more confusion to an already difficult decision process.

Uncertainty Visualization Design Space

There are two broad categories of uncertainty visualization techniques. The first is graphical annotations that can be used to show properties of a distribution, such as the mean, confidence/credible intervals, and distributional moments.

Numerous visualization techniques use the composition of marks (i.e., geometric primitives, such as dots, lines, and icons) to display uncertainty directly, as in error bars depicting confidence or credible intervals. Other approaches use marks to display uncertainty implicitly as an inherent property of the visualization. For example, hypothetical outcome plots (HOPs) are random draws from a distribution that are presented in an animated sequence, allowing viewers to form an intuitive impression of the uncertainty as they watch.
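
For example, a hypothetical outcome plot can be sketched with a short animation loop. The minimal example below (the normal distribution, sample size, and styling are illustrative assumptions, not taken from the article) draws one outcome per frame so viewers experience the spread rather than a static summary.

  import numpy as np
  import matplotlib.pyplot as plt
  from matplotlib.animation import FuncAnimation

  rng = np.random.default_rng(0)
  draws = rng.normal(loc=50, scale=10, size=30)   # 30 hypothetical outcomes

  fig, ax = plt.subplots()
  ax.set_xlim(0, 100)
  ax.set_yticks([])
  line = ax.axvline(draws[0], color="steelblue", linewidth=3)

  def show_draw(i):
      # Each frame shows a single draw; the sequence conveys the uncertainty
      line.set_xdata([draws[i], draws[i]])
      ax.set_title(f"Hypothetical outcome {i + 1} of {len(draws)}")
      return (line,)

  anim = FuncAnimation(fig, show_draw, frames=len(draws), interval=400)
  plt.show()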

The second category of techniques focuses on mapping probability or confidence to a visual encoding channel. Visual encoding channels define the appearance of marks using controls such as color, position, and transparency. Techniques that use encoding channels have the added benefit of adjusting a mark that is already in use, such as making a mark more transparent if the uncertainty is high. Marks and encodings that both communicate uncertainty can be combined to create hybrid approaches, such as in contour box plots and probability density and interval plots.

More expressive visualizations provide a fuller picture of the data by depicting more properties, such as the nature of the distribution and outliers, which can be lost with intervals. Other work proposes that showing distributional information in a frequency format (e.g., 1 out of 10 rather than 10%) more naturally matches how people think about uncertainty and can improve performance.

Visualizations that represent frequencies tend to be highly effective communication tools, particularly for individuals with low numeracy (e.g., inability to work with numbers), and can help people overcome various decision-making biases.
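
One way to put the frequency framing into practice is the quantile dotplot, discussed later in this article. The sketch below (a hypothetical bus arrival-time distribution, with all parameters invented) renders a predictive distribution as 20 equally likely outcomes so readers can count dots instead of interpreting percentages.

  import numpy as np
  import matplotlib.pyplot as plt

  rng = np.random.default_rng(1)
  samples = rng.normal(loc=12, scale=3, size=10_000)   # hypothetical predictive distribution

  n_dots = 20
  # 20 evenly spaced quantiles -> 20 equally likely hypothetical outcomes
  quantiles = np.quantile(samples, (np.arange(n_dots) + 0.5) / n_dots)

  # Bin each outcome to the nearest minute and stack the dots
  bins = np.round(quantiles).astype(int)
  xs, ys, counts = [], [], {}
  for b in bins:
      counts[b] = counts.get(b, 0) + 1
      xs.append(b)
      ys.append(counts[b])

  plt.scatter(xs, ys, s=200)
  plt.yticks([])
  plt.xlabel("Minutes until the bus arrives")
  plt.title("Each dot is 1 of 20 equally likely outcomes")
  plt.show()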

Researchers have dedicated a significant amount of work to examining which visual encodings are most appropriate for communicating uncertainty, notably in geographic information systems and cartography. One goal of these approaches is to evoke a sensation of uncertainty, for example, using fuzziness, fogginess, or blur.

Other work that examines uncertainty encodings also seeks to make looking up values more difficult when the uncertainty is high, such as value-suppressing color palettes.

Given that there is no one-size-fits-all technique, in the following sections, we detail the emerging cognitive theories that describe how and why each visualization technique functions.

[Figure: examples of common uncertainty visualization techniques]

Uncertainty Visualization Theories

The empirical evaluation of uncertainty visualizations is challenging. Many user experience goals (e.g., memorability, engagement, and enjoyment) and performance metrics (e.g., speed, accuracy, and cognitive load) can be considered when evaluating uncertainty visualizations. Beyond identifying the metrics of evaluation, even the simplest tasks have countless configurations. As a result, it is hard for any single study to sufficiently test the effects of a visualization to ensure that it is appropriate to use in all cases. Visualization guidelines based on a single study or a small set of studies are potentially incomplete. Theories can help bridge the gap between visualization studies by identifying and synthesizing converging evidence, with the goal of helping scientists make predictions about how a visualization will be used. Understanding foundational theoretical frameworks will empower designers to think critically about the design constraints in their work and generate optimal solutions for their unique applications. The theories detailed in the next sections are only those that have mounting support from numerous evidence-based studies in various contexts. As an overview, the table provides a summary of the dominant theories in uncertainty visualization, along with proposed visualization techniques.

[Table: dominant theories in uncertainty visualization and associated techniques]

General Discussion

There are no one-size-fits-all uncertainty visualization approaches, which is why visualization designers must think carefully about each of their design choices or risk adding more confusion to an already difficult decision process. This article overviews many of the common uncertainty visualization techniques and the cognitive theory that describes how and why they function, to help designers think critically about their design choices. We focused on the uncertainty visualization methods and cognitive theories that have received the most support from converging measures (e.g., the practice of testing hypotheses in multiple ways), but there are many approaches not covered in this article that will likely prove to be exceptional visualization techniques in the future.

There is no single visualization technique we endorse, but there are some that should be critically considered before employing them. Intervals, such as error bars and the Cone of Uncertainty, can be particularly challenging for viewers. If a designer needs to show an interval, we also recommend displaying information that is more representative, such as a scatterplot, violin plot, gradient plot, ensemble plot, quantile dotplot, or HOP. Just showing an interval alone could lead people to conceptualize the data as categorical. As alluded to in the prior paragraph, combining various uncertainty visualization approaches may be a way to overcome issues with one technique or get the best of both worlds. For example, each animated draw in a hypothetical outcome plot could leave a trace that slowly builds into a static display such as a gradient plot, or animated draws could be used to help explain the creation of a static technique such as a density plot, error bar, or quantile dotplot. Media outlets such as the New York Times have presented animated dots in a simulation to show inequalities in wealth distribution due to race. More research is needed to understand if and how various uncertainty visualization techniques function together. It is possible that combining techniques is useful in some cases, but new and undocumented issues may arise when approaches are combined.

In closing, we stress the importance of empirically testing each uncertainty visualization approach. As noted in numerous papers, the way that people reason with uncertainty is non-intuitive, which can be exacerbated when uncertainty information is communicated visually. Evaluating uncertainty visualizations can also be challenging, but it is necessary to ensure that people correctly interpret a display. A recent survey of uncertainty visualization evaluations offers practical guidance on how to test uncertainty visualization techniques.

Click here to access the entire article in the Handbook of Computational Statistics and Data Science

The exponential digital social world

Tech-savvy start-ups with natively digital business models regard this point in time as the best time in the history of the world to invent something. The world is buzzing with technology-driven opportunities leveraging the solid platform provided over the past 30 years, birthed from

  • the Internet,
  • then mobility,
  • social
  • and now the massive scale of cloud computing and the Internet of Things (IoT).

For the start-up community, this is a

  • platform for invention,
  • coupled with lowered / disrupted barriers,
  • access to venture capital,
  • better risk / benefit ratios
  • and higher returns through organisational agility.

Kevin Kelly, co-founder of Wired magazine, believes we are poised to create truly great things and that what’s coming is exponentially different, beyond what we envisage today – ‘Today truly is a wide open frontier. We are all becoming. It is the best time ever in human history to begin’ (June 2016). Throughout history, there have been major economic and societal shifts, and the revolutionary nature of these is only apparent retrospectively – at the time, the changes were experienced as linear and evolutionary. But now is different. Information access is globalised and is seen as a democratic right for first world citizens and a human right for the less advantaged.

The genesis was the Internet and the scale is now exponential because cloud-based platforms embed connections between data, people and things into the very fabric of business and daily life. Economies are information and services-based and knowledge is a valued currency. This plays out at a global, regional, community and household level. Pro-active leaders of governments, businesses and communities addressing these trends stress the need for innovation and transformative change (vs incremental) to shape future economies and societies across the next few years. In a far reaching example of transformative vision and action, Japan is undertaking ‘Society 5.0’, a full national transformation strategy including policy, national digitisation projects and deep cultural changes. Society 5.0 sits atop a model of five waves of societal evolution to a ‘super smart society’. The ultimate state (5.0) is achieved through applying technological advancements to enrich the opportunities, knowledge and quality of life for people of all ages and abilities.

[Figure: Society 5.0 and the five waves of societal evolution]

The Society 5.0 collaboration goes further than the digitisation of individual businesses and the economy; it includes all levels of Japanese society, and the transformation of society itself. Society 5.0 is a framework to tackle several macro challenges that are amplified in Japan, such as an ageing population – today, 26.3% of the Japanese population is over 65, while for the rest of the world, 20% of people will be over 60 by 2020. Japan is responding through the digitisation of healthcare systems and solutions, increased mobility and flexibility of work to keep people engaged in meaningful employment, and the digitisation of social infrastructure across communities and into homes. This journey is paved with important technology-enabled advances, such as

  • IoT,
  • robotics,
  • artificial intelligence,
  • virtual and augmented reality,
  • big data analytics
  • and the integration of cyber and physical systems.

Japan’s transformation approach is about more than embracing digital, it navigates the perfect storm of technology change and profound changes in culture, society and business models. Globally, we are all facing four convergent forces that are shaping the fabric of 21st century life.

  • It’s the digital social world – engaging meaningfully with people matters, not merely transacting
  • Generational tipping point – millennials now have the numbers as consumers and workers, their value systems and ways of doing and being are profoundly different
  • Business models – your value chain is no longer linear, you are becoming either an ecosystem platform or a player / supplier into that ecosystem
  • Digital is ubiquitous – like particles in the atmosphere, digital is all around us, connecting people, data and things – it’s the essence of 21st century endeavours

How do leaders of our iconic, successful industrial-era businesses view this landscape? Leaders across organisations, governments and communities are alert to the opportunities and threats from an always-on economy. Not all leaders are confident they have a cohesive strategy and the right resources to execute a transformative plan for success in this new economy of knowledge, digital systems and the associated intangible assets – the digital social era. RocketSpace, a global ecosystem providing a network of campuses for start-up acceleration, estimates that 10 years from now, in 2027, 75% of today’s S&P 500 will be replaced by digital start-ups (RocketSpace Disruption Brief, March 2017). Even accounting for some potential skew in this estimate, we are in the midst of unprecedented change.

What is change about?

What are the strategic assets and capabilities that an organisation needs to have when bridging from the analogue to the digital world? Key to succeeding in this is taking the culture and business models behind successful start-ups and imbuing them into the mature enterprise. Organisations need to employ outside-in, stakeholder-centric design-thinking and adopt leveraged business models that create

  • scaled resources,
  • agility,
  • diversity of ideas

and headspace to

  • explore,
  • experiment,
  • fail and try again.

The need to protect existing assets and sources of value creation remains important. However, what drives value is changing, so a revaluation of portfolios is needed against a new balance sheet, the digital social balance sheet.

The Dimension Data Digital Social Balance Sheet evolved from analysing transformational activities with our global clients from the S&P500, the government sector, education and public health sectors and not-for-profits. We also learnt from collaborations with tech start-ups and our parent company, Nippon Telegraph and Telephone Group’s (NTT) R&D investment activities, where they create collaborative ecosystems referred to as B2B2X. The balance sheet represents the seven top level strategic capabilities driving business value creation in the digital social era. This holds across all industries, though it may be expressed differently and have different relative emphasis for various sectors – for example, stakeholders may include employees, partners, e-lance collaborators, customers, patients, shareholders or a congregation.

[Figure: the Dimension Data Digital Social Balance Sheet]

Across each capability we have defined five levels of maturity, and this extends the balance sheet into the Dimension Data Digital Enterprise Capability Maturity Model. This is a holistic, globally standardised framework. From this innovative tool, organisations can

  • assess themselves today,
  • specify their target state,
  • conduct competitive benchmarking,
  • and map out a clear pathway of transitions for their business and stakeholders.

The framework can also be applied to construct your digital balance sheet reporting – values and measures can be monitored against organisational objectives.
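
As a simple illustration of that kind of reporting, the sketch below compares current and target maturity levels and reports the transition gap per capability. The pillar names and scores are invented placeholders, not the framework’s official labels.

  # Hypothetical current vs. target maturity (levels 1-5) per capability pillar
  current = {"Stakeholder engagement": 2, "Information value": 3, "Business model": 2}
  target = {"Stakeholder engagement": 4, "Information value": 4, "Business model": 3}

  for pillar in current:
      gap = target[pillar] - current[pillar]
      print(f"{pillar:25s} current={current[pillar]} target={target[pillar]} gap={gap}")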

Where does your organisation sit? Thinking about your best and worst experiences with a business or government organisation this year, what do those experiences reveal about its capabilities? Across each of the pillars of this model, technology is a foundation and an enabler of progressive maturity. For example, effective data architecture and data management platforming underpins the information value capability of responsiveness. A meaningful capability will be enabled by the virtual integration of hybrid data sources (internal systems, external systems, machines, sensors, social) for enhanced perception, discovery, insight and action by both knowledge workers and AI agents. Uber is a leading innovator in this and is also applying deep learning to predict demand and direct supply, not just in time, but just before time. In this, they are exploring beyond today’s proven and mainstream capabilities to generate unique business value.

Below is a high-level assessment of three leading digitals at this point in their business evolution – Uber, Alibaba and the Estonian government. We infer their capabilities from our research of their organisational journeys and milestones, using published material such as articles and case studies, as well as our personal experiences engaging with their platforms. Note that each of these organisations’ capabilities are roughly aligned across the seven pillars – this is key to sustainable value creation. For example, an updated online presence aimed at improving user experience delivers limited value if it is not integrated in real time across all channels, with information leveraged to learn and deepen engagement, and with processes designed around user context that can adapt to fulfil the point-in-time need.

[Figure: capability assessment of Uber, Alibaba, and the Estonian government]

Innovation horizons

In the model below, key technology trends are shown. We have set out a view of their progression to exponential breakthrough (x axis) and the points at which these technologies will reach the peak of the adoption curve, flipping from early to late adopters (y axis). Relating this to the Digital Enterprise Capability Maturity Model, level 1 and 2 capabilities derive from what are now mature foundations (past). Level 3 aligns with what is different and has already achieved the exponential breakthrough point. Progressing to level 4 requires a preparedness to innovate and experiment with what is different and beyond. Level 5 entails an appetite to be a first mover, experimenting with technologies that will not be commercial for five to ten years but could provide significant first-mover advantage. This is where innovators such as Elon Musk set their horizons with Tesla and SpaceX.

An example of all of this coming together at level 3 of digital capability maturity and the different horizon – involving cloud, mobility, big data, analytics, IoT and cybersecurity – to enable a business to transform is Amaury Sport Organisation (A.S.O.) and its running of the Tour de France. The Tour was conceived in 1903 as an event to promote and sell A.S.O.’s publications and is today the most watched annual sporting event in the world. Spectators, athletes and coaches are hungry for details and insights into the race and the athletes. Starting from the 2015 Tour, A.S.O. has leapt forward as a digital business. Data collected from sensors connected to each cyclist’s bike is aggregated on a secure, cloud-based, big data platform, analysed in real time and turned into entertaining insights and valuable performance statistics for followers and stakeholders of the Tour. This has opened up new avenues of monetisation for A.S.O. Dimension Data is the technology services partner enabling this IoT-based business platform.
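
To illustrate the kind of real-time aggregation such a platform performs, here is a hypothetical sketch only; the message fields and values are invented, not A.S.O.’s actual data model. It folds speed readings from riders’ bike sensors into live per-rider statistics, the sort of summary that would feed race insights.

  from collections import defaultdict

  # Invented sensor messages; in production these would arrive as a stream
  readings = [
      {"rider": 101, "speed_kph": 42.5},
      {"rider": 102, "speed_kph": 39.8},
      {"rider": 101, "speed_kph": 44.1},
  ]

  stats = defaultdict(lambda: {"count": 0, "total": 0.0, "max": 0.0})
  for msg in readings:
      s = stats[msg["rider"]]
      s["count"] += 1
      s["total"] += msg["speed_kph"]
      s["max"] = max(s["max"], msg["speed_kph"])

  for rider, s in stats.items():
      print(f"Rider {rider}: avg {s['total'] / s['count']:.1f} kph, max {s['max']:.1f} kph")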

[Figure: innovation horizons for key technology trends]

If your organisation is not yet on the technology transformation path, consider starting now. For business to prosper from the digital economy, you must be platformed to enable success – ready and capable to seamlessly connect humans, machines and data and to assure secure ecosystem flows. The settings of our homes, cars, schools and learning institutions, health and fitness establishments, offices, cities, retail outlets, factories, defence forces, emergency services, logistics providers and other services are all becoming forever different in this digital atmosphere.

Where is your innovation horizon set? The majority of our co-innovation agendas with our clients are focused on the 'beyond' horizons. In relation to this, we see four pairs of interlinked technologies being most impactful:

  • artificial intelligence and robotics;
  • virtual/augmented reality and the human-machine interface;
  • nanotechnology and 3D/4D printing;
  • cybersecurity and the blockchain.

Artificial intelligence and robotics

Artificial intelligence (AI) is both a science and set of technologies inspired by the way humans sense, perceive, learn, reason, and act.

We are rapidly consuming AI and embedding it into our daily lives, taking it for granted. Think about how we rely upon GPS and location services, use Google for knowledge, expect Facebook to identify and tag faces, ask Amazon to recommend a good read and Spotify to generate a personalised music list. Not so long ago, these technologies were awe-inspiring.

Now, and into the next 15 years, there is an AI revolution underway, a constellation of different technologies coming together to propel AI forward as a central force in society. Our relationships with machines will become more nuanced and personalised. There’s a lot to contemplate here. We really are at a juncture where discussion is needed at all levels about the ways that we will and won’t deploy AI to promote democracy and prosperity and equitably share the wealth created from it.

The areas in which this will have the fastest impact are transportation, traditional employment and workplaces, the home, healthcare, education, public safety and security, and entertainment. Let's look at examples from some of these settings:

Transportation – Autonomous vehicles encapsulate IoT, all forms of machine learning, computer vision and robotics. They will soon break through the exponential point, once the physical hardware systems are robust enough.

Healthcare – There is significant potential for the use of AI in pure and applied research and healthcare service delivery, as well as in aged-care and disability-related services. The collection of data from clinical equipment (e.g. MRI scanners and surgical robots), clinical electronic health records, facility-based room sensors, personal monitoring devices and mobile apps is allowing more complete digital health records to be compiled. Analysis of these records will evolve clinical understanding. For example, NTT Data provides a Unified Clinical Archive Service for radiologists, offering machine learning interpretation of MRI brain imagery. The service provides digital translations of MRI brain scans and contains complete data sets of normal brain function (gathered from Johns Hopkins University in the US). Radiologists are able to quantitatively evaluate their patients' results against the normal population to improve diagnostics. Each new dataset adds to the ecosystem of knowledge.
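
As a hedged illustration of what quantitative evaluation against a normal population can look like, the sketch below computes simple z-scores for a patient's regional measurements against an invented normative reference set. The region names and values are hypothetical and are not drawn from the NTT Data service.

```python
from statistics import mean, stdev

# Hypothetical normative reference measurements (e.g. regional volumes in ml)
# from a healthy population; all values here are invented for illustration.
NORMATIVE_DATA = {
    "hippocampus": [3.5, 3.7, 3.6, 3.8, 3.4, 3.65, 3.55],
    "thalamus":    [6.8, 7.1, 6.9, 7.0, 6.7, 7.2, 6.95],
}


def z_scores(patient_measurements: dict) -> dict:
    """Express each patient measurement as a z-score relative to the
    normative population; strongly negative scores flag regions that are
    smaller than expected."""
    scores = {}
    for region, value in patient_measurements.items():
        reference = NORMATIVE_DATA[region]
        scores[region] = round((value - mean(reference)) / stdev(reference), 2)
    return scores


# Example patient (values invented); a clinician would review any region
# whose score falls well outside the normal range.
print(z_scores({"hippocampus": 3.1, "thalamus": 6.9}))
```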

Education – AI promises to enhance education at all levels, particularly in providing personalisation at scale for all learners. Interactive machine tutors are now being matched to students. Learning analytics can detect how a student is feeling, how they will perform and which interventions are most likely to improve learning outcomes. Online learning has also enabled great teachers to reach worldwide audiences, while at the same time students' individual learning needs can be addressed through analysis of their responses to the global mentor. Postgraduate and professional learning is set to become more modular and flexible, with AI used to assess current skills and work-related projects and to match learning modules of most immediate career value – an 'assemble your own degree' approach. Virtual reality, along with AI, is also changing learning content and pathways to mastery, and so will be highly impactful. AI will never replace good teaching, so the meaningful integration of AI with face-to-face teaching will be key.
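
A minimal sketch of the 'assemble your own degree' matching idea follows. The skill taxonomy, module catalogue and scoring rule are assumptions made for illustration, not any provider's actual system.

```python
# Hypothetical module catalogue: module name -> set of skills it teaches.
CATALOGUE = {
    "Applied Machine Learning":  {"python", "statistics", "model evaluation"},
    "Cloud Data Engineering":    {"python", "sql", "data pipelines"},
    "Stakeholder Communication": {"presenting", "negotiation"},
}


def recommend_modules(current_skills: set, target_role_skills: set, top_n: int = 2):
    """Rank modules by how many of the learner's missing skills they cover."""
    gaps = target_role_skills - current_skills
    scored = [
        (module, len(skills & gaps))
        for module, skills in CATALOGUE.items()
        if skills & gaps
    ]
    scored.sort(key=lambda pair: pair[1], reverse=True)
    return scored[:top_n]


# Example: a learner with some existing skills, targeting a data-focused role.
print(recommend_modules(
    current_skills={"python", "presenting"},
    target_role_skills={"python", "sql", "data pipelines", "statistics"},
))
```

A real system would weight modules by difficulty, prerequisites and career signals rather than simple overlap, but the gap-analysis structure is the same.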

Public safety and security – Cybersecurity is a key area for applied AI. Machine learning over the datasets generated by ubiquitously placed surveillance cameras and drones is another. In tax, financial services, insurance and international policing, algorithms are improving the conduct of fraud investigations. A significant driver for advances in deep learning, particularly in video and audio processing, has come off the back of anti-terrorist analytics. All of these strands are now coming together in emergency response planning and orchestration, and in the emerging field of predictive policing.
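
To ground the fraud-analytics point, one common pattern is unsupervised anomaly scoring of transactions. The sketch below uses scikit-learn's IsolationForest on invented feature values; it illustrates the general technique rather than any institution's production system.

```python
import numpy as np
from sklearn.ensemble import IsolationForest

# Invented transaction features: [amount, hour_of_day, distance_from_home_km].
transactions = np.array([
    [25.0,    9,   2.0],
    [40.0,   12,   1.5],
    [18.0,   18,   3.0],
    [32.0,   20,   2.5],
    [4200.0,  3, 850.0],   # unusual: large amount, odd hour, far from home
])

# Fit an isolation forest and score every transaction; negative scores are
# more anomalous and would be queued for human investigation.
model = IsolationForest(contamination=0.2, random_state=0)
model.fit(transactions)
scores = model.decision_function(transactions)

for features, score in zip(transactions, scores):
    flag = "REVIEW" if score < 0 else "ok"
    print(f"amount={features[0]:>8.2f}  score={score:+.3f}  {flag}")
```

In practice such scores feed case-management workflows, with investigators' decisions looped back in as labels to train supervised models over time.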

Virtual reality/augmented reality and the human machine interface

The lines between the physical and digital worlds are merging along the 'virtuality' continuum of augmented and virtual reality. Augmented reality (AR) technologies overlay digital information on the 'real world'; the digital information is delivered via a mechanism such as a heads-up display, smart glass wall or wrist display. Virtual reality (VR) immerses a person in an artificial environment where they interact with data, with their visual senses (and others) controlled by the VR system. Augmented virtuality blends AR and VR. As virtuality becomes part of our daily lives, the ways in which we interact with each other, learn, work and transact are being re-shaped.

At the 2017 NTT R&D Fair in Tokyo, the use of VR in sports coaching and in the spectator experience was showcased, with participants able to experience playing against elite tennis and baseball players and riding in the Tour de France. A VR spectator experience also enabled viewers to directly experience the rider's view and the sensation of the rider's heart rate and fatigue levels. These applications of VR and AI are being rapidly incorporated into sports analytics and coaching.

Other enterprise VR use cases include:

  • teaching peacekeeping skills to troops in conflict zones;
  • the creation of travel adventures;
  • immersion in snowy terrain to reduce pain for burn victims;
  • teaching autistic teenagers to drive;
  • 3D visualisations of organs prior to conducting surgery.

It isn’t hard to imagine the impact on educational and therapeutic services, government service delivery, a shopping experience, on social and cultural immersion for remote communities and on future business process design and product engineering.

Your transformation journey

Every business is becoming a digital business. Some businesses are being caught off guard by the pace and nature of change. They are finding themselves reactive, pulled into the digital social world by the forces of disruption and the new rules of engagement set by clients, consumers, partners, workers and competitors. Getting on the front foot is important in order to control your destiny and assure future success. The disruptive forces upon us present opportunities to create a new future and value for your organisation and stakeholders. There are also risks, but the risk management approach of doing nothing is not viable in these times.

Perhaps your boardroom and executive discussions need to step back from thinking about the evolution of the current business and think, in an unconstrained 'art of the possible' manner, about the impact of global digital disruption and the sources of value creation into the future. What are the opportunities, threats and risks that these present? What is in the best interests of the shareholders? How will you retain and improve your sector competitiveness and use digital to diversify?

Is a new industry play now possible? Is your transformed digital business creating the ecosystem (acting as a platform business) or operating within another? How will it drive the business outcomes and value you expect and some that you haven’t envisaged at this point?

The digital balance sheet and seven pillars of digital enterprise capability could be used as the paving blocks for your pathway from analogue to digital. The framework can also guide and measure your progressive journey.

[Figure DD5: The digital balance sheet and seven pillars of digital enterprise capability as a transformation pathway]

Our experiences with our clients globally show us that the transformation journey is most effective when executed across three horizons of change. Effective three-horizon planning follows a pattern for course charting, with a general flow of:

  1. Establish – laying out the digital fabric to create the core building blocks for the business and executing the must do/no regret changes that will uplift and even out capability maturity to a minimum of level 2.
  2. Extend – creating an agile, cross-functional and collaborative capability across the business and executing a range of innovation experiments that create options, in parallel with the key transformative moves.
  3. Enhance – embedding the digital social balance sheet into ‘business as usual’, and particularly imbuing innovation to continuously monitor, renew and grow the organisation’s assets.

In this, there are complexities and nuances of the change, including:

  • Re-balancing of the risk vs opportunity appetite from the board
  • Acceptable ROI models
  • The ability of the organisation to absorb change
  • Dependencies across and within the balance sheet pillars
  • Maintaining transitional balance across the pillars
  • Managing finite resources – achieving operational cost savings to enable the innovation investment required to achieve the target state

The horizon plans also need to have flex – so that pace and fidelity can be dialled up or down to respond to ongoing disruption and the dynamic operational context of your organisation.

Don't turn away from analogue wisdom; it is an advantage. Born-digital enterprises don't have established physical channels and presence, have not experienced economic cycles and lack longitudinal wisdom. By valuing analogue experience while also embracing the essence of outside-in thinking and the new digital social business models, the executive can confidently execute.

A key learning is that the journey is also the destination – by

  • mobilising cross-functional teams,
  • drawing on diverse skills and perspectives,
  • and empowering people to act using quality information that is meaningful to them,

you uplift your organisational capabilities, which in itself will become one of your most valuable assets.
