News

Integrating Finance, Risk and Regulatory Reporting (FRR) through Comprehensive Data Management

Data travels faster than ever, anywhere and all the time. Yet as fast as it moves, it has barely been able to keep up with the expanding agendas of financial supervisors. You might not know it to look at them, but the authorities in Basel, Washington, London, Singapore and other financial and political centers are pretty swift themselves when it comes to devising new requirements for compiling and reporting data. They seem to want nothing less than a renaissance in the way institutions organize and manage their finance, risk and regulatory reporting activities.

The institutions themselves might want the same thing. Some of the business strategies and tactics that made good money for banks before the global financial crisis have become unsustainable and cut into their profitability. More stringent regulatory frameworks imposed since the crisis require the implementation of complex, data-intensive stress testing procedures and forecasting models that call for unceasing monitoring and updating. The days of static reports capturing a moment in a firm’s life are gone. One of the most challenging data management burdens is rooted in duplication. The evolution of regulations has left banks with various bespoke databases across five core functions:

  • credit,
  • treasury,
  • profitability analytics,
  • financial reporting
  • and regulatory reporting,

with the same data inevitably appearing and being processed in multiple places. This hodgepodge of bespoke marts leads both to the duplication of data and processes and to the risk of inconsistencies – which tend to rear their heads at inopportune moments, that is, when consistent data needs to be presented to regulators. For example,

  • credit extracts core loan, customer and credit data;
  • treasury pulls core cash flow data from all instruments;
  • profitability departments pull the same instrument data as credit and treasury and add ledger information for allocations;
  • financial reporting pulls ledgers and some subledgers for reporting;
  • and regulatory reporting pulls the same data yet again to submit reports to regulators per prescribed templates.

The ever-growing list of considerations has compelled firms to revise, continually and on the fly, not just how they manage their data but how they manage their people and basic organizational structures. An effort to integrate activities and foster transparency – in particular through greater cooperation between risk and finance – has emerged across financial services. This often has been in response to demands from regulators, but some of the more enlightened leaders in the industry see it as the most sensible way to comply with supervisory mandates and respond to commercial exigencies as well. Their ability to do so has been constrained by the variety, frequency and sheer quantity of information sought by regulators, boards and senior executives. But that is beginning to change as a result of new technological capabilities and, at least as important, new management strategies.

This is where the convergence of Finance, Risk and Regulatory Reporting (FRR) comes in. The idea behind the FRR theme is that sound regulatory compliance and sound business analytics are manifestations of the same set of processes. Satisfying the demands of supervisory authorities and maximizing profitability and competitiveness in the marketplace involve similar types of analysis, modeling and forecasting. Each is best achieved, therefore, through a comprehensive, collaborative organizational structure that places the key functions of finance, risk and regulatory reporting at its heart.

The glue that binds this entity together and enables it to function as efficiently and cost effectively as possible – financially and in the demands placed on staff – is a similarly comprehensive and unified approach to FRR data management. The right architecture will permit data to be drawn from all relevant sources across an organization, including disparate legacy hardware and software accumulated over the years in silos erected for different activities and geographies. Such an approach will reconcile and integrate this data and present it in a common, consistent, transparent fashion, permitting it to be deployed in the most efficient way within each department and for every analytical and reporting need, internal and external.

The immense demands for data, and for a solution to manage it effectively, have served as a catalyst for a revolutionary development in data management: Regulatory Technology, or RegTech. The definition is somewhat flexible and tends to vary with the motivations of whoever is doing the defining, but RegTech basically is the application of cutting-edge hardware, software, design techniques and services to the idiosyncratic challenges related to financial reporting and compliance. The myriad advances that fall under the RegTech rubric, such as centralized FRR or RegTech data management and analysis, data mapping and data visualization, are helping financial institutions to get out in front of the stringent reporting requirements at last and to integrate finance, risk and regulatory reporting duties more fully, easily and creatively.

A note of caution though: While new technologies and new thinking about how to employ them will present opportunities to eliminate weaknesses that are likely to have crept into the current architecture, ferreting out those shortcomings may be tricky because some of them will be so ingrained and pervasive as to be barely recognizable. But it will have to be done to make the most of the systems intended to improve or replace existing ones.

Just what a solution should encompass to enable a firm to meet its data management objectives depends on the

  • specifics of its business, including its size and product lines,
  • the jurisdictions in which it operates,
  • its IT budget
  • and the tech it has in place already.

But it should accomplish three main goals:

  1. Improving data lineage by establishing a trail for each piece of information at any stage of processing (a minimal sketch of such a trail follows this list)
  2. Providing a user-friendly view of the different processing steps to foster transparency
  3. Working together seamlessly with legacy systems so that implementation takes less time and money and imposes less of a burden on employees.
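
To make the first goal concrete, here is a minimal sketch, in Python, of how a lineage trail might be attached to each data element as it moves through processing stages. The class and field names, the systems and the figures are illustrative assumptions, not features of any particular FRR product.

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone
from typing import List


@dataclass
class LineageStep:
    """One hop in a data element's journey (illustrative fields)."""
    system: str          # e.g. "loan_subledger", "frr_warehouse"
    operation: str       # e.g. "extract", "transform", "aggregate"
    timestamp: str
    notes: str = ""


@dataclass
class DataElement:
    """A reported value together with its full processing trail."""
    name: str
    value: float
    lineage: List[LineageStep] = field(default_factory=list)

    def record(self, system: str, operation: str, notes: str = "") -> None:
        self.lineage.append(
            LineageStep(system, operation,
                        datetime.now(timezone.utc).isoformat(), notes)
        )


# Usage: trace a single exposure figure from source system to regulatory report.
exposure = DataElement("counterparty_exposure_EUR", 12_500_000.0)
exposure.record("loan_subledger", "extract", "nightly batch")
exposure.record("frr_warehouse", "transform", "mapped to common chart of accounts")
exposure.record("regulatory_reporting", "aggregate", "rolled up for reporting template")

for step in exposure.lineage:
    print(step.system, step.operation, step.timestamp, step.notes)
```

In a real warehouse the same idea would typically live in metadata tables rather than application objects, but the principle of recording every hop, so that any reported number can be traced back to its sources, is the same.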

The two great trends in financial supervision – the rapid rise in data management and reporting requirements, and the demands for greater organizational integration – can be attributed to a single culprit: the lingering silo structure. Fragmentation continues to be supported by such factors as a failure to integrate the systems of component businesses after a merger and the tendency of some firms to find it more sensible, even if it may be more costly and less efficient in the long run, to install new hardware and software whenever a new set of rules comes along. That makes regulators – the people pressing institutions to break down silos in the first place – inadvertently responsible for erecting new barriers.

This bunker mentality – an entrenched system of entrenchment – made it impossible to recognize the massive buildup of credit difficulties that resulted in the global crisis. It took a series of interrelated events to spark the wave of losses and insolvencies that all but brought down the financial system. Each of them might have appeared benign or perhaps ominous but containable when taken individually, and so the occupants of each silo, who could only see a limited number of the warning signs, were oblivious to the extent of the danger. More than a decade has passed since the crisis began, and many new supervisory regimens have been introduced in its aftermath. Yet bankers, regulators and lawmakers still feel the need, with justification, to press institutions to implement greater organizational integration to try to forestall the next meltdown. That shows how deeply embedded the silo system is in the industry.

The data requirements for the forward-looking approach that, knock on wood, will limit the damage from the next crisis – determining what will happen rather than identifying and explaining what has already happened – are enormous. The same goes for running an institution in a more integrated way. It’s not just more data that’s needed, but more kinds of data and more reliable data. A holistic, coordinated organizational structure, moreover, demands that data be analyzed at a higher level to reconcile the massive quantities and types of information produced within each department. And institutions must do more than compile and sort through all that data. They have to report it to authorities – often quarterly or monthly, sometimes daily and always when something is flagged that could become a problem. Indeed, some data needs to be reported in real time. That is a nearly impossible task for a firm still dominated by silos and highlights the need for genuinely new design and implementation methods that facilitate the seamless integration of finance, risk and regulatory reporting functions. Among the more data-intensive regulatory frameworks introduced or enhanced in recent years are:

  • IFRS 9 Financial Instruments and Current Expected Credit Loss. The respective protocols of the International Accounting Standards Board and Financial Accounting Standards Board may provide the best examples of the forward-thinking approach – and rigorous reporting, data management and compliance procedures – being demanded. The standards call for firms to forecast credit impairments to assets on their books in near real time. The incurred-loss model being replaced merely had banks present bad news after the fact. The number of variables required to make useful forecasts, plus the need for perpetually running estimates that hardly allow a chance to take a breath, make the standards some of the most data-heavy exercises of all. A simplified sketch of the forward-looking loss calculation follows this list.
  • Stress tests here, there and everywhere. Whether for the Federal Reserve’s Comprehensive Capital Analysis and Review (CCAR) for banks operating in the United States, the Firm Data Submission Framework (FDSF) in Britain or Asset Quality Reviews, the version conducted by the European Banking Authority (EBA) for institutions in the euro zone, stress testing has become more frequent and more free-form, too, with firms encouraged to create stress scenarios they believe fit their risk profiles and the characteristics of their markets. Indeed, the EBA is implementing a policy calling on banks to conduct stress tests as an ongoing risk management procedure and not merely an assessment of conditions at certain discrete moments.
  • Dodd-Frank Wall Street Reform and Consumer Protection Act. The American law expands stress testing to smaller institutions that escape the CCAR. The act also features extensive compliance and reporting procedures for swaps and other over-the-counter derivative contracts.
  • European Market Infrastructure Regulation. Although less broad in scope than Dodd-Frank, EMIR has similar reporting requirements for European institutions regarding OTC derivatives.
  • AnaCredit, Becris and FR Y-14. The European Central Bank’s AnaCredit project, known formally as the Analytical Credit Dataset, and its Federal Reserve equivalent for American banks, FR Y-14, introduce a step change in the amount and granularity of data that needs to be reported. Information on loans and counterparties must be reported contract by contract under AnaCredit, for example. Adding to the complication and the data demands, the European framework permits national variations, including some with particularly rigorous requirements, such as the Belgian Extended Credit Risk Information System (Becris).
  • MAS 610. The core set of returns that banks file to the Monetary Authority of Singapore is being revised to require information at a far more granular level beginning next year. The number of data elements that firms have to report will rise from about 4,000 to about 300,000.
  • Economic and Financial Statistics Review (EFS). The Australian Prudential Regulation Authority’s EFS Review constitutes a wide-ranging update to the regulator’s statistical data collection demands. The sweeping changes include requests for more granular data and new forms in what would be a three-phase implementation spanning two years, requiring parallel and trial periods running through 2019 and beyond.
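
As a rough illustration of why IFRS 9/CECL-style standards are so data-hungry, the sketch below computes a probability-weighted, discounted lifetime expected credit loss from the familiar PD x LGD x EAD decomposition. All scenario weights, default probabilities and exposure figures are invented for the example; real implementations need far richer term structures and forward-looking macro overlays.

```python
# Simplified forward-looking ECL: probability-weighted PD * LGD * EAD,
# discounted over the remaining lifetime. All inputs are illustrative only.

scenarios = [
    # (name, scenario weight, annual probability of default, loss given default)
    ("baseline", 0.60, 0.020, 0.45),
    ("upside",   0.15, 0.010, 0.40),
    ("downside", 0.25, 0.055, 0.55),
]

ead = 1_000_000.0        # exposure at default
discount_rate = 0.03     # effective interest rate used for discounting
lifetime_years = 3       # remaining lifetime for a stage-2 exposure


def lifetime_ecl(weight: float, pd_annual: float, lgd: float) -> float:
    ecl, survival = 0.0, 1.0
    for year in range(1, lifetime_years + 1):
        marginal_pd = survival * pd_annual          # default occurs in this year
        loss = marginal_pd * lgd * ead
        ecl += loss / (1 + discount_rate) ** year   # discount back to today
        survival *= (1 - pd_annual)
    return weight * ecl


total_ecl = sum(lifetime_ecl(w, pd, lgd) for _, w, pd, lgd in scenarios)
print(f"Probability-weighted lifetime ECL: {total_ecl:,.0f}")
```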

All of those authorities, all over the world, requiring that much more information present a daunting challenge, but they aren’t the only ones demanding that finance, risk and regulatory reporting staffs raise their games. Boards, senior executives and the real bosses – shareholders – have more stringent requirements of their own for profitability, capital efficiency, safety and competitiveness. Firms need to develop more effective data management and analysis in this cause, too.

The critical role of data management was emphasized and codified in Document 239 of the Basel Committee on Banking Supervision (BCBS), “Principles for Effective Risk Data Aggregation and Risk Reporting.” PERDARR, as it has come to be called in the industry, assigns data management a central position in the global supervisory architecture, and the influence of the 2013 paper can be seen in mandates far and wide. BCBS 239 explicitly linked a bank’s ability to gauge and manage risk with its ability to function as an integrated, cooperative unit rather than a collection of semiautonomous fiefdoms. The process of managing and reporting data, the document makes clear, enforces the link and binds holistic risk assessment to holistic operating practices. The Basel committee’s chief aim was to make sure that institutions got the big picture of their risk profile so as to reveal unhealthy concentrations of exposure that might be obscured by focusing on risk segment by segment. Just in case that idea might escape some executive’s notice, the document mentions the word “aggregate,” in one form or another, 86 times in the 89 ideas, observations, rules and principles it sets forth.

The importance of aggregating risks, and having data management and reporting capabilities that allow firms to do it, is spelled out in the first of these: ‘One of the most significant lessons learned from the global financial crisis that began in 2007 was that banks’ information technology (IT) and data architectures were inadequate to support the broad management of financial risks. Many banks lacked the ability to aggregate risk exposures and identify concentrations quickly and accurately at the bank group level, across business lines and between legal entities. Some banks were unable to manage their risks properly because of weak risk data aggregation capabilities and risk reporting practices. This had severe consequences to the banks themselves and to the stability of the financial system as a whole.’
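
A toy example of the aggregation capability BCBS 239 describes: rolling up exposures booked in different legal entities and business lines to surface a group-level single-name concentration that no individual silo would flag. The exposures, names and limit are invented for illustration.

```python
from collections import defaultdict

# Invented exposures booked in different legal entities and business lines.
# Individually each looks modest; aggregation reveals the group-level concentration.
exposures = [
    {"entity": "Bank AG",  "business_line": "corporate", "counterparty": "ACME Group", "amount": 180.0},
    {"entity": "Bank Ltd", "business_line": "markets",   "counterparty": "ACME Group", "amount": 220.0},
    {"entity": "Bank SpA", "business_line": "leasing",   "counterparty": "ACME Group", "amount": 150.0},
    {"entity": "Bank AG",  "business_line": "corporate", "counterparty": "Beta Corp",  "amount": 90.0},
]

GROUP_LIMIT = 400.0  # illustrative single-name concentration limit

by_counterparty = defaultdict(float)
for e in exposures:
    by_counterparty[e["counterparty"]] += e["amount"]

for name, total in sorted(by_counterparty.items(), key=lambda kv: -kv[1]):
    flag = "LIMIT BREACH" if total > GROUP_LIMIT else "ok"
    print(f"{name:<12} group-wide exposure {total:>7.1f}  {flag}")
```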

If risk data management was an idea whose time had come when BCBS 239 was published five years ago, then RegTech should have been the means to implement the idea. RegTech was being touted even then, or soon after, as a set of solutions that would allow banks to increase the quantity and quality of the data they generate, in part because RegTech itself was quantitatively and qualitatively ahead of the hardware and software with which the industry had been making do. There was just one ironic problem: Many of the RegTech solutions on the market at the time were highly specialized and localized products and services from small providers. That encouraged financial institutions to approach data management deficiencies gap by gap, project by project, perpetuating the compartmentalized, siloed thinking that was the scourge of regulators and banks alike after the global crisis. The one-problem-at-a-time approach also displayed to full effect another deficiency of silos: a tendency for work to be duplicated, with several departments each producing the same information, often in different ways and with different results. That is expensive and time-consuming, of course, and the inconsistencies that are likely to crop up make the data untrustworthy for regulators and for executives within the firm who are counting on it.

Probably the most critical feature of a well thought-out solution is a dedicated, focused and central FRR data warehouse that can chisel away at the barriers between functions, even at institutions that have been slow to abandon a siloed organizational structure reinforced with legacy systems.

The accompanying diagram summarizes the FRR data flow, with:

  • E: Extract
  • L: Load
  • T: Transform Structures
  • C: Calculations
  • A: Aggregation
  • P: Presentation
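
The sketch below strings those stages together over a tiny in-memory "warehouse" to show how a single extract-and-load can feed calculation, aggregation and presentation consistently. Every function, record shape and risk weight is an assumption made for the example, not a description of any vendor's implementation.

```python
# Illustrative Extract-Load-Transform-Calculate-Aggregate-Present flow
# over an in-memory store. Stage names follow the diagram above.

source_systems = {
    "credit":   [{"instrument": "LN-001", "balance": 250.0, "segment": "corporate"}],
    "treasury": [{"instrument": "BD-042", "balance": 900.0, "segment": "sovereign"}],
}

def extract(systems):                       # E: pull raw records once, from every silo
    return [row for rows in systems.values() for row in rows]

def load(warehouse, rows):                  # L: land them in the shared store
    warehouse.extend(rows)

def transform(warehouse):                   # T: map to a common structure
    return [{**r, "balance_eur": r["balance"]} for r in warehouse]

def calculate(rows):                        # C: derive figures (here, a crude risk weight)
    weights = {"corporate": 1.0, "sovereign": 0.0}
    return [{**r, "rwa": r["balance_eur"] * weights[r["segment"]]} for r in rows]

def aggregate(rows):                        # A: roll up by segment
    totals = {}
    for r in rows:
        totals[r["segment"]] = totals.get(r["segment"], 0.0) + r["rwa"]
    return totals

def present(totals):                        # P: the same numbers for every report
    for segment, rwa in sorted(totals.items()):
        print(f"{segment:<10} RWA: {rwa:,.1f}")

warehouse: list = []
load(warehouse, extract(source_systems))
present(aggregate(calculate(transform(warehouse))))
```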

 

Click here to access Wolters Kluwer’s White Paper

 

 

Perspectives on the next wave of cyber

Financial institutions are acutely aware that cyber risk is one of the most significant perils they face and one of the most challenging to manage. The perceived intensity of the threats, and Board level concern about the effectiveness of defensive measures, ramp up continually as bad actors increase the sophistication, number, and frequency of their attacks.

Cyber risk management is high on or at the top of the agenda for financial institutions across the sector globally. Highly visible attacks of increasing insidiousness and sophistication are headline news on an almost daily basis. The line between criminal and political bad actors is increasingly blurred with each faction learning from the other. In addition, with cyberattack tools and techniques becoming more available via the dark web and other sources, the population of attackers continues to increase, with recent estimates putting the number of cyberattackers globally in the hundreds of thousands.

Cyber offenses against banks, clearers, insurers, and other major financial services sector participants will not abate any time soon. Judging by the velocity and frequency of attacks, the motivation to target financial services institutions can be several hundred times higher than for non-financial services organizations.

Observing these developments, regulators are prescribing increasingly stringent requirements for cyber risk management. New and emerging regulation will force changes on many fronts and will compel firms to demonstrate that they are taking cyber seriously in all that they do. However, compliance with these regulations will only be one step towards assuring effective governance and control of institutions’ Cyber Risk.

We explore the underlying challenges with regard to cyber risk management and analyze the nature of increasingly stringent regulatory demands. Putting these pieces together, we frame five strategic moves which we believe will enable businesses to satisfy business needs, their fiduciary responsibilities with regard to cyber risk, and regulatory requirements:

  1. Seek to quantify cyber risk in terms of capital and earnings at risk.
  2. Anchor all cyber risk governance through risk appetite.
  3. Ensure effectiveness of independent cyber risk oversight using specialized skills.
  4. Comprehensively map and test controls, especially for third-party interactions.
  5. Develop and exercise major incident management playbooks.

These points are consistent with global trends for cyber risk management. Further, we believe that our observations on industry challenges and the steps we recommend to address them are applicable across geographies, especially when considering prioritization of cyber risk investments.

FIVE STRATEGIC MOVES

The current environment poses major challenges for Boards and management. Leadership has to fully understand the cyber risk profile the organization faces to simultaneously protect the institution against ever-changing threats and be on the front foot with regard to increasing regulatory pressures, while prioritizing the deployment of scarce resources. This is especially important given that regulation is still maturing and it is not yet clear how high the compliance bars will be set and what resources will need to be committed to achieve passing grades.

With this in mind, we propose five strategic moves which we believe, based on our experience, will help institutions position themselves well to address existing cyber risk management challenges.

1) Seek to quantify cyber risk in terms of capital and earnings at risk

Boards of Directors and all levels of management intuitively relate to risks that are quantified in economic terms. Explaining any type of risk, opportunity, or tradeoff relative to the bottom line brings sharper focus to the debate.

For all financial and many non-financial risks, institutions have developed methods for quantifying expected and unexpected losses in dollar terms that can readily be compared to earnings and capital. Further, regulators have expected this as a component of regulatory and economic capital, CCAR, and/or resolution and recovery planning. Predicting losses due to cyber events is particularly difficult because they consist of a combination of direct, indirect, and reputational elements which are not easy to quantify. In addition, there is limited historical cyber loss exposure data available to support robust cyber risk quantification.

Nevertheless, institutions still need to develop a view of their financial exposure to cyber risk at different levels of confidence and understand how this varies by business line, process, or platform. In some cases, these views may be more expert-based, using scenario analysis approaches as opposed to raw statistical modeling outputs (a minimal sketch of such an approach follows the list below). The objectives are still the same – to challenge perspectives as to

  • how much risk exposure exists,
  • how it could manifest within the organization,
  • and how specific response strategies are reducing the institution’s inherent cyber risk.
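
Here is a minimal sketch of how such an expert-based scenario approach could be turned into a loss distribution: each scenario is given an assumed annual frequency and a lognormal severity, and simulation yields an expected loss and a tail percentile that can be compared with earnings or capital. Every parameter below is invented for illustration, not a calibrated estimate.

```python
import math
import random

random.seed(7)

# Illustrative scenarios: annual event frequency, median loss, severity dispersion.
scenarios = {
    "ransomware_outage":   {"freq": 0.30, "median_loss": 5_000_000,  "sigma": 1.0},
    "data_breach":         {"freq": 0.15, "median_loss": 20_000_000, "sigma": 0.8},
    "third_party_failure": {"freq": 0.10, "median_loss": 8_000_000,  "sigma": 1.2},
}

def poisson(lam: float) -> int:
    """Sample an event count (Knuth's method, fine for small lambda)."""
    threshold, k, p = math.exp(-lam), 0, 1.0
    while True:
        p *= random.random()
        if p <= threshold:
            return k
        k += 1

def annual_loss() -> float:
    total = 0.0
    for s in scenarios.values():
        for _ in range(poisson(s["freq"])):
            # Lognormal severity with the assumed median and dispersion.
            total += random.lognormvariate(math.log(s["median_loss"]), s["sigma"])
    return total

losses = sorted(annual_loss() for _ in range(20_000))
expected = sum(losses) / len(losses)
var_99 = losses[int(0.99 * len(losses))]

print(f"Expected annual cyber loss : {expected:>14,.0f}")
print(f"99th percentile (tail) loss: {var_99:>14,.0f}")
```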

2) Anchor all cyber risk governance through risk appetite

Regulators are specifically insisting on the establishment of a cyber risk strategy, which is typically shaped by a cyber risk appetite. This should represent an effective governance anchor to help address the Board’s concerns about whether appropriate risks are being considered and managed effectively.

Setting a risk appetite enables the Board and senior management to more deeply understand exposure to specific cyber risks, establish clarity on the Cyber imperatives for the organization, work out tradeoffs, and determine priorities.

Considering cyber risk in this way also enables it to be brought into a common framework with all other risks and provides a starting point to discuss whether the exposure is affordable (given capital and earnings) and strategically acceptable.

Cyber risk appetite should be cascaded down through the organization and provide a coherent management and monitoring framework consisting of

  • metrics,
  • assessments,
  • and practical tests or exercises

at multiple levels of granularity. Such cascading establishes a relatable chain of information at each management level across business lines and functions. Each management layer can hold the next layer more specifically accountable. Parallel business units and operations can have common standards for comparing results and sharing best practices.
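
One way to picture such a cascade is as a set of metrics, each with an owner, an alert threshold and a tolerance limit, that every management layer reviews against the same definitions. The metric names and limits in this sketch are purely illustrative assumptions.

```python
from dataclasses import dataclass
from typing import List

@dataclass
class AppetiteMetric:
    """One cascaded cyber-risk appetite metric (all names and limits illustrative)."""
    name: str
    owner_level: str          # e.g. "board", "business_line", "platform"
    current: float
    alert: float              # early-warning threshold
    tolerance: float          # hard limit agreed with the Board

    def status(self) -> str:
        if self.current >= self.tolerance:
            return "BREACH"
        if self.current >= self.alert:
            return "ALERT"
        return "OK"

cascade: List[AppetiteMetric] = [
    AppetiteMetric("simulated tail loss / capital (%)", "board",          4.2,  5.0,  8.0),
    AppetiteMetric("critical systems unpatched > 30d",  "business_line", 14.0, 10.0, 25.0),
    AppetiteMetric("phishing test failure rate (%)",    "platform",       6.5,  8.0, 15.0),
]

# Each management layer reviews the metrics it owns; Second and Third Line
# can attest against the same thresholds.
for m in cascade:
    print(f"{m.owner_level:<14} {m.name:<36} {m.current:>6.1f}  -> {m.status()}")
```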

Finally, Second and Third Line can have focal points to review and assure compliance. A risk appetite chain further provides a means for the attestation of the effectiveness of controls and adherence to governance directives and standards.

Where it can be demonstrated that risk appetite is being upheld to procedural levels, management will be more confident in providing the attestations that regulators require.


3) Ensure effectiveness of independent cyber risk oversight using specialized skills

From our perspective, firms face challenges when attempting to practically fit cyber risk management into a “Three Lines of Defense” model and align cyber risk holistically within an enterprise risk management framework.

CROs and risk management functions have traditionally developed specialized skills for many risk types, but often have not evolved as much depth on IT and cyber risks. Organizations have overcome this challenge by weaving risk management into the IT organization as a First Line function.

In order to more clearly segregate the roles between IT, business, and Information Security (IS), the Chief Information Security Officer (CISO) and the IS team will typically need to be positioned as a "1.5 Line of Defense". This allows an Information Security group to provide more formal oversight and guidance on the cyber requirements and to monitor day-to-day compliance across business and technology teams.

Further independent risk oversight and audit is clearly needed as part of the Third Line of Defense. Defining what oversight and audit means becomes more traceable and tractable when specific governance mandates and metrics from the Board down are established.

Institutions will also need to deal with the practical challenge of building and maintaining Cyber talent that can understand the business imperatives, compliance requirements, and associated cyber risk exposures.

At the leadership level, some organizations have introduced the concept of a Risk Technology Officer who interfaces with the CISO and is responsible for integration of cyber risk with operational risk.

4) Comprehensively map and test controls, especially for third-party interactions

Institutions need to undertake more rigorous and more frequent assessments of cyber risks across operations, technology, and people. These assessments need to test

  • the efficacy of surveillance,
  • the effectiveness of protection and defensive controls,
  • the responsiveness of the organization,
  • and the ability to recover

in a manner consistent with expectations of the Board.

Given the new and emerging regulatory requirements, firms will need to pay closer attention to the ongoing assessment and management of third parties. Third parties need to be tiered based on their access to and interaction with the institution’s high-value assets. Through this assessment process, institutions need to obtain a more practical understanding of their ability to get early warning of cyber threats. In a number of cases, a firm may choose to outsource more IT or data services to third-party providers (e.g., Cloud) where it considers that this option represents a more attractive and acceptable solution relative to the cost or talent demands associated with maintaining Information Security in-house for certain capabilities. At the same time, the risk of third-party compromise needs to be fully understood with respect to the overall risk appetite.


5) Develop and exercise incident management playbooks

A critical test of an institution’s cyber risk readiness is its ability to quickly and effectively respond when a cyberattack occurs.

As part of raising the bar on cyber resilience, institutions need to ensure that they have clearly documented and proven cyber incident response plans that include

  • a comprehensive array of attack scenarios,
  • clear identification of accountabilities across the organization,
  • response strategies,
  • and associated internal and external communication scenarios.

Institutions need to thoroughly test their incident response plan on an ongoing basis via tabletop exercises and practical drills. As part of a tabletop exercise, key stakeholders walk through specific attack scenarios to test their knowledge of response strategies. This exercise provides an avenue for exposing key stakeholders to more tangible aspects of cyber risk and their respective roles in the event of a cyberattack. It also can reveal gaps in specific response processes, roles, and communications that the institution will need to address.

Last but not least, incident management plans need to be reviewed and refined based on changes in the overall threat landscape and an assessment of the institution’s cyber threat profile, on a yearly or more frequent basis depending on the nature and volatility of the risk for a given business line or platform.

CONCLUSION

Cyber adversaries are increasingly sophisticated, innovative, organized, and relentless in developing new and nefarious ways to attack institutions. Cyber risk represents a relatively new class of risk which brings with it the need to grasp the often complex technological aspects, social engineering factors, and changing nature of Operational Risk as a consequence of cyber.

Leadership has to understand the threat landscape and be fully prepared to address the associated challenges. It would be impractical to have zero tolerance to cyber risk, so institutions will need to determine their risk appetite with regard to cyber, and consequently, make direct governance, investment, and operational design decisions.

The new and emerging regulations are a clear directive to financial institutions to keep cyber risk at the center of their enterprise-wide business strategy, raising the overall bar for cyber resilience. The associated directives and requirements across the many regulatory bodies represent a good and often strong basis for cyber management practices but each institution will need to further ensure that they are tackling cyber risk in a manner fully aligned with the risk management strategy and principles of their firm. In this context, we believe the five moves represent multiple strategically important advances almost all financial services firms will need to make to meet business security, resiliency, and regulatory requirements.


Click here to access MMC’s cyber handbook

 

 

The Global Risks Landscape 2019

Is the world sleepwalking into a crisis? Global risks are intensifying but the collective will to tackle them appears to be lacking. Instead, divisions are hardening. The world’s move into a new phase of strongly state-centred politics, noted in last year’s Global Risks Report, continued throughout 2018. The idea of “taking back control”— whether domestically from political rivals or externally from multilateral or supranational organizations — resonates across many countries and many issues. The energy now expended on consolidating or recovering national control risks weakening collective responses to emerging global challenges. We are drifting deeper into global problems from which we will struggle to extricate ourselves.

During 2018, macroeconomic risks moved into sharper focus. Financial market volatility increased and the headwinds facing the global economy intensified. The rate of global growth appears to have peaked: the latest International Monetary Fund (IMF) forecasts point to a gradual slowdown over the next few years. This is mainly the result of developments in advanced economies, but projections of a slowdown in China—from 6.6% growth in 2018 to 6.2% this year and 5.8% by 2022—are a source of concern. So too is the global debt burden, which is significantly higher than before the global financial crisis, at around 225% of GDP. In addition, a tightening of global financial conditions has placed particular strain on countries that built up dollar-denominated liabilities while interest rates were low.

Geopolitical and geo-economic tensions are rising among the world’s major powers. These tensions represent the most urgent global risks at present. The world is evolving into a period of divergence following a period of globalization that profoundly altered the global political economy. Reconfiguring the relations of deeply integrated countries is fraught with potential risks, and trade and investment relations among many of the world’s powers were difficult during 2018.

Against this backdrop, it is likely to become more difficult to make collective progress on other global challenges—from protecting the environment to responding to the ethical challenges of the Fourth Industrial Revolution. Deepening fissures in the international system suggest that systemic risks may be building. If another global crisis were to hit, would the necessary levels of cooperation and support be forthcoming? Probably, but the tension between the globalization of the world economy and the growing nationalism of world politics is a deepening risk.

Environmental risks continue to dominate the results of our annual Global Risks Perception Survey (GRPS). This year, they accounted for three of the top five risks by likelihood and four by impact. Extreme weather was the risk of greatest concern, but our survey respondents are increasingly worried about environmental policy failure: having fallen in the rankings after Paris, “failure of climate-change mitigation and adaptation” jumped back to number two in terms of impact this year. The results of climate inaction are becoming increasingly clear. The accelerating pace of biodiversity loss is a particular concern. Species abundance is down by 60% since 1970. In the human food chain, biodiversity loss is affecting health and socioeconomic development, with implications for well-being, productivity, and even regional security.

Technology continues to play a profound role in shaping the global risks landscape. Concerns about data fraud and cyber-attacks were prominent again in the GRPS, which also highlighted a number of other technological vulnerabilities: around two-thirds of respondents expect the risks associated with fake news and identity theft to increase in 2019, while three-fifths said the same about loss of privacy to companies and governments. There were further massive data breaches in 2018, new hardware weaknesses were revealed, and research pointed to the potential uses of artificial intelligence to engineer more potent cyberattacks. Last year also provided further evidence that cyber-attacks pose risks to critical infrastructure, prompting countries to strengthen their screening of cross-border partnerships on national security grounds.

The importance of the various structural changes that are under way should not distract us from the human side of global risks. For many people, this is an increasingly anxious, unhappy and lonely world. Worldwide, mental health problems now affect an estimated 700 million people. Complex transformations— societal, technological and work-related—are having a profound impact on people’s lived experiences. A common theme is psychological stress related to a feeling of lack of control in the face of uncertainty. These issues deserve more attention: declining psychological and emotional wellbeing is a risk in itself—and one that also affects the wider global risks landscape, notably via impacts on social cohesion and politics.

Another set of risks being amplified by global transformations relate to biological pathogens. Changes in how we live have increased the risk of a devastating outbreak occurring naturally, and emerging technologies are making it increasingly easy for new biological threats to be manufactured and released either deliberately or by accident. The world is badly under-prepared for even modest biological threats, leaving us vulnerable to potentially huge impacts on individual lives, societal well-being, economic activity and national security. Revolutionary new biotechnologies promise miraculous advances, but also create daunting challenges of oversight and control—as demonstrated by claims in 2018 that the world’s first gene-modified babies had been created.

Rapidly growing cities and ongoing effects of climate change are making more people vulnerable to rising sea levels. Two-thirds of the global population is expected to live in cities by 2050 and already an estimated 800 million people live in more than 570 coastal cities vulnerable to a sea-level rise of 0.5 metres by 2050. In a vicious circle, urbanization not only concentrates people and property in areas of potential damage and disruption, it also exacerbates those risks— for example by destroying natural sources of resilience such as coastal mangroves and increasing the strain on groundwater reserves. Intensifying impacts will render an increasing amount of land uninhabitable. There are three main strategies for adapting to rising sea-levels:

  1. engineering projects to keep water out,
  2. nature-based defences,
  3. and people-based strategies, such as moving households and businesses to safer ground or investing in social capital

to make flood-risk communities more resilient.

In this year’s Future Shocks section, we focus again on the potential for threshold effects that could trigger dramatic deteriorations and cause cascading risks to crystallize with dizzying speed. Each of the 10 shocks we present is a “what-if” scenario—not a prediction, but a reminder of the need to think creatively about risk and to expect the unexpected. Among the topics covered this year are

  • quantum cryptography,
  • monetary populism,
  • affective computing
  • and the death of human rights.

In the Risk Reassessment section, experts share their insights about how to manage risks. John Graham writes about weighing the trade-offs between different risks, and András Tilcsik and Chris Clearfield write about how managers can minimize the risk of systemic failures in their organizations.

And in the Hindsight section, we revisit three of the topics covered in previous reports:

  • food security,
  • civil society
  • and infrastructure investment.



Click here to access WEF-MMC-Zurich’s Global Risks Report 2019

 

Transform Your Business With Operational Decision Automation

Decisioning Applications Bring The Value Of Operational Decisions To Light

Businesses face the imperative to transform from analog to digital due to intense competition for increasingly demanding and digitally connected customers. The imperative to transform has ushered in a new era of decisioning applications in which every operational decision an organization makes can be considered a business asset. New applications inform and advance customer experience and drive operational actions in real time through automation. These applications are at the forefront of the effort to streamline operations and help organizations take the right action at the right time near-instantaneously.

Achieve Digital Goals With Automated Operational Decision Making

Automating decision life cycles allows firms to manage the fast changes required in increasingly digitized business processes. Automation of operational decisions is crucial to meeting digital goals: More than three-quarters of decision makers say it is important to their digital strategy, and close to half say it is very important.

"The Share of Decisions that are Automated will Increase Markedly in two Years"

The importance of automated operational decision making to digital strategy will lead to a sharp increase of automation in the near term. Today, about one-third of respondents say they have the majority of their operational decisions fully or partially automated. In two years, that group will double.

Use Cases For Automated Decisions Span The Customer Lifecycle But Current Focus Is On Early Stages

To improve the operational aspects of customer experience – and to reap the business benefits that come with delighting customers – firms align automated decision use cases to the customer lifecycle. At least some firms have expanded their share of automated operational decision making to include touchpoints across the customer lifecycle, from the discover phase all the way to the engage phase. However, our survey found that the majority have yet to implement automated decisions as fully in later stages.

Top Challenges Will Intensify With Rapid Expansion Of New Decisioning Tools

Firms are experiencing middling success with current decision automation tools. Only 22% are very satisfied with their decisioning software today. Misgivings with today’s tools include an inability to integrate with current systems or platforms, high cost, and a lack of consistency across channels and processes.

The growth of real-time automation use cases and the number of technologies brought on to handle them will exacerbate existing challenges with complexity and cost.

Decision Makers Recognize High Value In Decisioning Platforms That Work In Real Time

Decision makers face significant implementation and cost challenges on their path to automated operational decisions. As a result, getting the greatest business value for the power of their automation tools is top of mind.

"Eighty-one percent of Decision Makers say a Platform with Real-Time Decision-to-Action Cycles would be Valuable or Very Valuable to Achieving Digital Transformation Goals."

With better, targeted decisions based on real-time analytics, companies have the potential to acquire better customers, improve the operations that serve them, and retain them longer.


Click here to access Forrester’s research report

How to seize the Open Banking opportunity

What is Open Banking and why does it matter?

The UK has long been recognised as a global leader in banking. The industry plays a critical role domestically, enabling the day-to-day flow of money and management of risk that are essential for individuals and businesses.

It is also the most internationally competitive industry in the UK, providing the greatest trade surplus of any exporting industry. The UK has a mature and sophisticated banking market with leading Banks, FinTechs and Regulators. However, with fundamental technological, demographic, societal and political changes underway, the industry needs to transform itself in order to effectively serve society and remain globally relevant.

The industry faces a number of challenges. These include the fact that banking still suffers from a poor reputation and relatively low levels of trust when compared to other industries. Many of the incumbents are still struggling to modernise their IT platforms and to embrace digital in a way that fundamentally changes the cost base and the way customers are served.

There are also growing service gaps in the industry, with 16m people trapped in the financial advice gap. In the face of these challenges, Open Banking provides an opportunity to

  • open up the banking industry,
  • ignite innovation to tackle some of these issues
  • and radically enhance the public’s interaction and experience with the financial services industry.

A wave of new challenger banks has entered the market with these opportunities at the heart of their propositions. However, increased competition is no longer the only objective of Open Banking.

Open Banking regulation has evolved from the original intent

The UK started introducing an Open Banking Standard in 2016 to make the banking sector work harder for the benefit of consumers. The implementation of the standard was guided by recommendations from the Open Banking Working Group, made up of banks and industry groups and co-chaired by the Open Data Institute and Barclays. It had a focus on how data could be used to “help people to transact, save, borrow, lend and invest their money”. The standard’s framework sets out how to develop a set of standards, tools, techniques and processes that will stimulate competition and innovation in the country’s financial sector.

While the UK was developing Open Banking, the European Parliament adopted the revised payment services directive (PSD2) to make it easier, faster, and less expensive for customers to pay for goods and services, by promoting innovation (especially by third-party providers). PSD2 acknowledges the rise of payment-related FinTechs and aims to create a level playing field for all payment service providers while ensuring enhanced security and strong customer protection. PSD2 requires all payment account providers across the EU to provide third-party access.


While this does not require an open standard, PSD2 does provide the legal framework within which the Open Banking standard in the UK and future efforts at creating other national Open Banking standards in Europe will have to operate. The common theme within these initiatives is the recognition that individual customers have the right to provide third parties with access to their financial data. This is usually done in the name of

  • increased competition,
  • accelerating technology development of new products and services,
  • reducing fraud
  • and bringing more people into a financially inclusive environment.

Although the initial objectives of the Open Banking standards were to increase competition in banking and increase current account switching, the intent is continually evolving with a broader focus on areas including:

  • reduced overdraft fees,
  • improved customer service,
  • greater control of data
  • and increased financial inclusion.

Whilst there is little argument that the UK leads the way in Open Banking, it is by no means doing so alone. Many other countries are looking carefully at the UK experience to understand how a local implementation might learn from some of the issues experienced during the UK’s preparation and ‘soft launch’ in January 2018. There are many informal networks around the world, which link regulators, FinTechs and banks to facilitate the sharing of information from one market to another. Countries around the world are at various stages of maturity in implementing Open Banking. The UK leads as the only country to have legislated and built a development framework to support the regulations, enabling it to be advanced in bringing new products and services to market as a result. However, a number of other countries are progressing rapidly towards their own development of Open Banking. In a second group sit the EU, Australia and Mexico, which have taken significant steps in legislation and implementation. Canada, Hong Kong, India, Japan, New Zealand, Singapore, and the US are all making progress in preparing their respective markets for Open Banking initiatives.


One danger in any international shift in thinking, such as Open Banking, is that technology overtakes the original intention. The ‘core technology’ here is open APIs and they feature in all the international programmes, even when an explicit ‘Open Banking’ label is not applied. In a post-PSD2 environment, the primary responsibility for security risks will lie with payment service providers. Vulnerability to data security breaches may increase in line with the number of partners interacting via the APIs.
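
To give a flavour of what third-party access through an open API looks like in practice, the sketch below shows an account-information request made with a customer-consented token, assuming the Python requests library is available. The base URL, bank identifier, headers and response shape loosely follow the general pattern of the UK Open Banking read APIs, but they should be treated as illustrative placeholders rather than a drop-in integration.

```python
import requests

# Illustrative values: in a real PSD2/Open Banking flow the token is obtained
# through the bank's OAuth2 consent journey, and the base URL, headers and
# payload shapes are defined by the bank's published API specification.
BASE_URL = "https://api.examplebank.co.uk/open-banking/v3.1/aisp"
ACCESS_TOKEN = "customer-consented-oauth2-token"

def list_accounts() -> dict:
    """Account information request made by an authorised third-party provider."""
    response = requests.get(
        f"{BASE_URL}/accounts",
        headers={
            "Authorization": f"Bearer {ACCESS_TOKEN}",
            "x-fapi-financial-id": "EXAMPLE-BANK-ID",   # illustrative bank identifier
            "Accept": "application/json",
        },
        timeout=10,
    )
    response.raise_for_status()
    return response.json()

if __name__ == "__main__":
    # The "Data"/"Account" nesting mirrors the shape commonly used by UK
    # Open Banking account-information responses (assumed here).
    for account in list_accounts().get("Data", {}).get("Account", []):
        print(account.get("AccountId"), account.get("Nickname"))
```

The point of the example is less the specific endpoint than the security consequence described above: every such call is another partner interaction that has to be authenticated, consented and monitored.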

The new EU General Data Protection Regulation (GDPR) requires protecting customer data privacy as well as capturing and evidencing customer consent, with potentially steep penalties for breaches. Payment service providers must therefore ensure that comprehensive security measures are in place to protect the confidentiality and integrity of customers’ security credentials, assets and data.


Click here to access PwC’s detailed report

Banks sailing in uncertain waters

The apparent paradox of the decision-making process

Corporate decision-making processes are driven by seemingly opposing forces.

On the one hand, there is the human urge to have instruments at one's disposal in order

  • to understand the context and set a specific direction,
  • and to implement the actions required to follow the plotted course.

On the other hand, there is the exhortation to keep the mind open

  • to an array of possible future scenarios,
  • to imagine and grasp the implications of the various possible trajectories,
  • to plot alternative courses according to the obstacles and opportunities encountered, which could lead to landing places other than those originally contemplated.

These needs are intertwined as never before whenever the decision-maker operates in an area such as the banking sector, which is characterised by extremely pervasive regulatory requirements concerning the

  • maintenance and use of capital,
  • liquidity management,
  • checks on lending and distribution policies,

and which is structurally exposed to the volatility of the macroeconomic context and financial markets, greatly increasing the range of possible scenarios.

Thus, it is far from surprising or infrequent that one of the most common questions that CEOs ask the technical structures responsible for budgeting and risk planning is: ‘what if’? (‘what would happen if…?’). The problem is that, in the last few years, the ‘ifs’ at hand have rapidly multiplied, as there has been an exponential increase in the controlling variables for which feedback is required:

  • Net Interest Income (NII);
  • Cost Income ratio (C/I);
  • Return On Equity (ROE);
  • Non Performing Exposure (NPE) Ratio;
  • Liquidity Coverage Ratio (LCR);
  • Expected Credit Loss (ECL);
  • Common Equity Tier 1 (CET1) ratio,

to cite but a few among the most widespread. Planning has turned into an interdisciplinary and convoluted exercise, an issue hard to solve for CFOs and CROs in particular (especially if they do not operate in close cooperation).
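
The following toy sketch shows what answering a ‘what if’ consistently across several of these controlling variables involves: two deterministic scenario shocks are applied to a stylised balance sheet, and NII, cost/income, ROE and a crude CET1 ratio are read off together. Every figure and sensitivity is invented; a real planning engine would model each driver in far greater depth.

```python
# Toy 'what-if' engine: apply deterministic scenario shocks to a stylised
# bank and read off a few of the controlling variables listed above.
# Every figure and sensitivity below is invented for illustration.

base = {
    "interest_income": 2_400.0, "interest_expense": 900.0,
    "other_income": 600.0, "operating_costs": 1_700.0,
    "loan_losses": 300.0, "equity": 4_000.0, "rwa": 30_000.0,
}

scenarios = {
    "baseline":       {"rate_shift_bp": 0,   "loss_uplift": 1.0},
    "rates_up_100bp": {"rate_shift_bp": 100, "loss_uplift": 1.1},
    "mild_recession": {"rate_shift_bp": -50, "loss_uplift": 1.6},
}

def run(shock):
    b = dict(base)
    # Crude sensitivities: +100bp adds 150 to interest income and 60 to expense.
    b["interest_income"] += 1.5 * shock["rate_shift_bp"]
    b["interest_expense"] += 0.6 * shock["rate_shift_bp"]
    b["loan_losses"] *= shock["loss_uplift"]

    nii = b["interest_income"] - b["interest_expense"]
    revenues = nii + b["other_income"]
    profit = revenues - b["operating_costs"] - b["loan_losses"]
    return {
        "NII": nii,
        "C/I %": 100 * b["operating_costs"] / revenues,
        "ROE %": 100 * profit / b["equity"],
        "CET1 %": 100 * (b["equity"] + profit) / b["rwa"],
    }

for name, shock in scenarios.items():
    kpis = ", ".join(f"{k} {v:,.1f}" for k, v in run(shock).items())
    print(f"{name:<15} {kpis}")
```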

This greater complexity can progressively erode the quality of banks’ decision-making processes, which are more often than not based on an incomplete information framework whenever some controlling variables are unavailable, or on an incorrect one when the information, specialist expertise and/or instruments required to model events are actually lacking.

In partial mitigation, such events are not only numerous but also interdependent in their impact on the bank’s results and particularly heterogeneous. They can in fact be exogenous (turbulence and interference along the way) or endogenous (the actions that the helmsman and the crew implement during navigation).

In the first case, the events are beyond the control of those responsible for the decision-making process, being determined by the evolution of market conditions and/or the choices of institutional actors. As such, they are often hard to predict in their likelihood of occurrence, intensity, timing and duration. By nature, such phenomena are characterised by complex interactions, which make it crucial, albeit arduous, to comprehend the cause-and-effect mechanisms governing them. Lastly, their relevance is not absolute but relative, in that it depends on the degree of reactivity of the bank’s business model and budgetary structure to the single risk factors to which the market value of the bank’s assets is exposed.

Conversely, in the case of endogenous events, uncertainty is more correlated to the ability of the bank’s top management

  • to quantify the level of ambition of the business actions,
  • to assess their multiple implications,
  • and, more specifically, to the bank’s actual ability to implement them within the requested time frames and terms.

The taxonomy of banking strategic planning

Although these complexities are increasingly obvious, many banks remain convinced that they can set out on their respective courses with certainty, exposing themselves to a range of risks that can restrict or irreversibly compromise the efficacy of their decision-making processes. Some institutions are indeed persuaded that the ‘expert-based’ approach that has always characterised their planning methodologies will continue to be sufficient and appropriate for steering the bank in the future.

History teaches us that things have not always worked out that way. These actors have yet to understand that it has now become vital to foster the evolution of the planning process towards a model relying upon analytical methodologies and highly sophisticated technological instruments (risk management, econometrics, statistics, financial engineering, …), making them available to those who have always considered experience, business knowledge and budgetary dynamics to be the privileged instruments for making decisions.

The second mistake is that many banks believe uncertainty analysis to be wasteful and redundant for the purposes of planning since, ultimately, the allocation of objectives is (and will remain) based on assumptions and uniquely identified scenarios. In this case, the risk lies in failing to understand that a broader analysis of possible scenarios in fact contributes to better delineating the assigned objectives, by separating the external conditions from the contribution provided by internal actions. Moreover, testing various hypotheses and combinations of cases makes it easier to calibrate the ‘level of managerial ambition’, in line with the actual potential of the organisational structure and with the full involvement of the business functions responsible for attaining the corporate objectives.

The intersection of these two misreadings of the context results in different positionings of the bank, each with its attendant risks and opportunities.

Models

ILLUMINATED

The planning process is built upon analytical data and models developed with the contribution of subject-matter experts of different origins, which makes it possible to consider the impacts of a specific scenario on the bank’s budget simultaneously and coherently. Not only does this improve the planning of specific items, it also provides the appropriate instruments to switch to a multi-scenario perspective and investigate the scenarios relevant for management, appraising the volatility around the expected results. This transition is extremely delicate: it entails a change in the way prospective information is produced by the technical functions and subsequently channelled to the top management and board of directors. In this context, the bank is governed via the analysis of deterministic scenarios and the statistical analysis of the probability distributions of the variables of interest. Leveraging this set of information (much more abundant and articulated than the traditional one), targets, risk propensity levels and the related alert and tolerance thresholds are established; business owners are provided not only with the final objectives, but also with details concerning the key risk factors (endogenous and exogenous alike) that might represent critical or success factors, and their respective probabilities of occurrence.
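
As a hedged illustration of the step from a single deterministic budget to a distribution of outcomes, the sketch below draws the exogenous drivers of net interest income from assumed distributions and reads off where a budget target and a tolerance threshold sit. The distributions, sensitivities and thresholds are all invented for the example.

```python
import random
import statistics

random.seed(42)

# Move from one deterministic budget to a distribution of outcomes:
# draw the exogenous drivers from assumed distributions and observe where
# the planned target and the tolerance threshold fall.

TARGET_NII = 1_600.0        # budget objective
TOLERANCE_NII = 1_450.0     # alert/tolerance threshold agreed with the board

def simulate_nii() -> float:
    rate_shift_bp = random.gauss(0, 60)          # market rates: mean 0, sd 60bp
    volume_growth = random.gauss(0.02, 0.015)    # lending growth: 2% +/- 1.5%
    income = 2_400.0 * (1 + volume_growth) + 1.5 * rate_shift_bp
    expense = 900.0 + 0.6 * rate_shift_bp
    return income - expense

outcomes = sorted(simulate_nii() for _ in range(50_000))
p5, p50 = outcomes[int(0.05 * len(outcomes))], statistics.median(outcomes)
prob_below_tolerance = sum(o < TOLERANCE_NII for o in outcomes) / len(outcomes)

print(f"Median NII            : {p50:,.0f}")
print(f"5th percentile NII    : {p5:,.0f}")
print(f"P(NII < tolerance)    : {prob_below_tolerance:.1%}")
print(f"P(NII < budget target): {sum(o < TARGET_NII for o in outcomes)/len(outcomes):.1%}")
```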

DELUDED

The budget planning process is characterised by the prevalence of an expert-based approach (with a limited capacity to integrate quantitative models and methodologies, in that not all budget items are always developed with the necessary instruments and expertise) and is aimed at forecasting a single baseline scenario (the one under which the budget objectives are formalised and then allocated across organisational units and business combinations).

ENLIGHTENED

The budgetary planning process is very accurate and incorporates the specialist expertise (often cross-functional) required to understand and transmit the interactions across managerial phenomena so as to ensure a full grasp of the bank’s ongoing context. The onus is chiefly on the ability to explain the phenomena inside the bank, without prejudice to the external baseline scenario, which is ‘given’ by definition.

MISSING

The planning process attempts to consider the impact of alternative scenarios compared with the baseline scenario; however, it does so on the basis of imprecise or incomplete modelling, developed without the analytical foundations and instruments required to appraise the consistency and likelihood of these scenarios and to support such serious consideration. The focus remains on comparing the results produced under diverse conditions, while taking into account the approximations used.

Click here to access Prometeia’s white paper

Is Your Company Ready for Artificial Intelligence?

Overview

Companies are rushing to invest in and pursue initiatives that use artificial intelligence (AI). Some hope to find opportunities to transform their business processes and gain competitive advantage, while others are concerned about falling behind the technology curve. But the reality is that many AI initiatives don’t work as planned, largely because companies are not ready for AI.

However, it is possible to leverage AI to create real business value. The key to AI success is ensuring the organization is ready by having the basics in place, particularly structured analytics and automation. Other elements of AI readiness include

  • executive engagement and support,
  • data excellence,
  • organizational capabilities,
  • and completion of AI pilots.

Key Takeaways

There is tremendous AI hype and investment. Artificial intelligence is software that can make decisions without explicit instructions for each scenario, including an ability to learn and improve over time. The term “machine learning” is often used interchangeably with AI, but machine learning is just one approach to AI, though it is currently the approach generating the most attention. Today in most business situations where AI is relevant, machine learning is likely to be employed.

The hype around AI is tremendous and has accelerated in the last few years. It is rare to read a business-related article these days that doesn’t mention AI.

The AI hype is being accompanied by massive investments from corporations (like Amazon, Google, and Uber), as well as from venture capital firms.

Because organizations often pursue AI without fully understanding it or having the basics in place, many AI initiatives fail. The AI fervor is causing companies to pursue AI hurriedly; there is a rush to capitalize on it, but significant frustration when it comes to actually delivering AI success. Initiatives are often pursued for the wrong reasons, and many run into pitfalls. Some key pitfalls are:

  • Expensive partnerships between large companies and startups without results.
  • Impenetrable black box systems.
  • Open source toolkits without programmers to code.

The root cause for these failures often boils down to companies confusing three different topics:

  • automation,
  • structured analytics,
  • and artificial intelligence.


Despite the challenges, some organizations are experiencing success with AI. While the hype around AI is overblown, there are organizations having success by leveraging AI to create business value, particularly when AI is used for customer support and in the back office.

The key to AI success is first having the basics in place. In assessing AI successes and failures, the presenters drew three conclusions:

  1. There is a huge benefit from first getting the basics right: automation and structured analytics are prerequisites to AI.
  2. The benefits from AI are greater once these basics have been done right.
  3. Organizations are capable of working with AI at scale only when the basics have been done at scale.

GETTING THE BASICS RIGHT

The most important basics for AI are automation and structured analytics.

  • Automation: In most businesses there are many examples of data processes that can be automated. In many of these examples, there is no point having advanced AI if the basics are not yet automated.
  • Structured analytics means applying standard statistical techniques to well-structured data. In most companies there is huge value in getting automation and structured analytics right before getting to more complicated AI.
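
A small example of “structured analytics” in the sense used here: a standard statistical technique applied to well-structured data and wrapped in a repeatable function, so the same estimate can be rerun automatically whenever new data lands. The price and volume figures are invented, and the snippet assumes Python 3.10+ for statistics.linear_regression.

```python
from statistics import linear_regression  # Python 3.10+

# "Structured analytics": a standard technique (ordinary least squares) applied
# to well-structured data. The weekly price/units figures are invented.
prices = [9.99, 9.49, 8.99, 8.49, 7.99, 7.49]
units  = [120,  135,  160,  180,  210,  240]

slope, intercept = linear_regression(prices, units)

def expected_units(price: float) -> float:
    """Automatable step: re-run the same estimate whenever new data arrives."""
    return intercept + slope * price

print(f"Estimated demand at 8.75: {expected_units(8.75):.0f} units/week")
print(f"Estimated demand at 7.25: {expected_units(7.25):.0f} units/week")
```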

Examples of how businesses use structured analytics and automation include:

  • Competitor price checking. A retailer created real-time pricing intelligence by automatically scraping prices from competitors’ websites.
  • Small business cash flow lending product. Recognizing the need for small business customers to acquire loans in days, not weeks, a bank created an online lending product built on structured analytics.

BENEFITS WHEN THE BASICS ARE IN PLACE

Once the basics of structured analytics and automation are in place, organizations see more value from AI—when AI is used in specific situations.


Examples of how adding AI on top of the basics helps improve business results are:

  • New product assortment decisions. Adding AI on top of structured analytics allowed a retailer to predict the performance of new products for which there was no historic data. With this information, the retailer was able to decide whether or not to add the product to the stores.
  • Promotions forecasting. A retailer was able to improve forecasting of promotional sales using AI. Within two months of implementation, machine learning was better than the old forecasts plus the corrections made by the human forecasting team (a toy sketch of this idea follows this list).
  • Customer churn predictions. A telephone company used AI and structured analytics to identify how to keep at-risk customers from leaving.
  • Defect detection. An aerospace manufacturer used AI to supplement human inspection and improve defect detection.
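
The promotions example can be illustrated with a toy model: fit a simple gradient-boosted regressor on structured promotion features and compare it with a naive “average promotion week” baseline. The data is invented and scikit-learn is assumed to be available; a production forecaster would use far richer features and far more history.

```python
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.metrics import mean_absolute_error

# Invented weekly promotion history: [discount %, display flag, weeks since last promo]
X_train = [[10, 0, 8], [20, 1, 4], [30, 1, 2], [15, 0, 6], [25, 1, 3], [5, 0, 10],
           [20, 0, 5], [30, 0, 2], [10, 1, 7], [25, 0, 4]]
y_train = [110, 180, 260, 135, 215, 95, 160, 230, 140, 190]   # promo-week unit sales

X_test = [[15, 1, 5], [30, 1, 1]]
y_test = [175, 275]

model = GradientBoostingRegressor(random_state=0).fit(X_train, y_train)

baseline = [sum(y_train) / len(y_train)] * len(y_test)   # naive "average promo" forecast
print("Baseline MAE:", round(mean_absolute_error(y_test, baseline), 1))
print("Model MAE   :", round(mean_absolute_error(y_test, model.predict(X_test)), 1))
```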

AI AT SCALE AFTER THE BASICS ARE AT SCALE

Once an organization proves it can work with automation and structured analytics at scale, it is ready for AI at scale. Readiness for AI at scale goes beyond completing a few AI pilots in defined but isolated areas of capability; the basics need to be in use across the business.

Before undertaking AI, organizations need to assess their AI readiness. To be successful, organizations need to be ready for AI. Readiness consists of multiple elements, including

  • executive engagement and support,
  • data excellence,
  • organizational capabilities,
  • and an analytical orientation.

Organizations often struggle with data excellence and organizational capabilities.


Click here to access HBR and SAS article collection