Uncertainty Visualization

Uncertainty is inherent to most data and can enter the analysis pipeline during the measurement, modeling, and forecasting phases. Effectively communicating uncertainty is necessary for establishing scientific transparency. Further, people commonly assume that there is uncertainty in data analysis, and they need to know the nature of the uncertainty to make informed decisions.

However, understanding even the most conventional communications of uncertainty is highly challenging for novices and experts alike, due in part to the abstract nature of probability and to ineffective communication techniques. Reasoning with uncertainty is universally difficult, but researchers are revealing how some types of visualizations can improve decision-making in a variety of contexts,

  • from hazard forecasting,
  • to healthcare communication,
  • to everyday decisions about transit.

Scholars have distinguished different types of uncertainty, including

  • aleatoric (irreducible randomness inherent in a process),
  • epistemic (uncertainty from a lack of knowledge that could theoretically be reduced given more information),
  • and ontological uncertainty (uncertainty about how accurately the modeling describes reality, which can only be described subjectively).

The term risk is also used in some decision-making fields to refer to quantified forms of aleatoric and epistemic uncertainty, whereas uncertainty is reserved for potential error or bias that remains unquantified. Here we use the term uncertainty to refer to quantified uncertainty that can be visualized, most commonly as a probability distribution. This article begins with a brief overview of common uncertainty visualization techniques and then elaborates on the cognitive theories that describe how these approaches influence judgments. The goal is to provide readers with the theoretical infrastructure needed to critically evaluate the various visualization techniques in the context of their own audience and design constraints. Importantly, there is no one-size-fits-all uncertainty visualization approach guaranteed to improve decisions in all domains, nor is there even a guarantee that presenting uncertainty to readers will improve their judgments or trust. Visualization designers must therefore think carefully about each of their design choices or risk adding more confusion to an already difficult decision process.

Uncertainty Visualization Design Space

There are two broad categories of uncertainty visualization techniques. The first consists of graphical annotations that can be used to show properties of a distribution, such as the mean, confidence/credible intervals, and distributional moments.

Numerous visualization techniques use the composition of marks (i.e., geometric primitives, such as dots, lines, and icons) to display uncertainty directly, as in error bars depicting confidence or credible intervals. Other approaches use marks to display uncertainty implicitly as an inherent property of the visualization. For example, hypothetical outcome plots (HOPs) are random draws from a distribution that are presented in an animated sequence, allowing viewers to form an intuitive impression of the uncertainty as they watch.
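
To make the mechanics concrete, here is a minimal Python (matplotlib) sketch of a HOP, assuming a normally distributed estimate; the distribution parameters, number of draws, and frame timing are illustrative choices, not taken from the article.

```python
# A minimal HOP sketch: each animation frame shows one random draw from an
# assumed normal distribution of the estimate (parameters are illustrative).
import numpy as np
import matplotlib.pyplot as plt
from matplotlib.animation import FuncAnimation

rng = np.random.default_rng(seed=1)
draws = rng.normal(loc=50, scale=10, size=20)  # 20 hypothetical outcomes

fig, ax = plt.subplots()
bar = ax.bar(["estimate"], [draws[0]])[0]
ax.set_ylim(0, 100)

def update(i):
    bar.set_height(draws[i])  # each frame shows one random draw
    return (bar,)

anim = FuncAnimation(fig, update, frames=len(draws), interval=400)
plt.show()
```

Watching the bar jump frame to frame gives viewers a frequency-based impression of the spread without requiring them to decode an abstract interval.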

The second category of techniques maps probability or confidence to a visual encoding channel. Visual encoding channels define the appearance of marks through controls such as color, position, and transparency. Techniques that use encoding channels have the added benefit of adjusting a mark that is already in use, for example by making a mark more transparent when the uncertainty is high. Mark-based and encoding-based approaches can be combined to create hybrids, such as contour box plots and probability density and interval plots.
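
As an illustration of an encoding-channel approach, the following sketch maps per-point uncertainty to opacity; the synthetic data and the linear alpha mapping are assumptions made for demonstration only.

```python
# Sketch of uncertainty mapped to an encoding channel (opacity): points with
# higher standard error are drawn more transparently. Data are synthetic.
import numpy as np
import matplotlib.pyplot as plt

rng = np.random.default_rng(seed=2)
x = np.arange(10)
y = rng.normal(50, 5, size=10)
stderr = rng.uniform(1, 8, size=10)  # per-point uncertainty
# linear mapping: least certain points get alpha 0.2, most certain get 1.0
alpha = 1.0 - (stderr - stderr.min()) / (stderr.max() - stderr.min()) * 0.8

fig, ax = plt.subplots()
for xi, yi, ai in zip(x, y, alpha):
    ax.scatter(xi, yi, alpha=ai, color="tab:blue", s=80)
ax.set_xlabel("category")
ax.set_ylabel("estimate")
plt.show()
```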

More expressive visualizations provide a fuller picture of the data by depicting more properties, such as the nature of the distribution and outliers, which can be lost with intervals. Other work proposes that showing distributional information in a frequency format (e.g., 1 out of 10 rather than 10%) more naturally matches how people think about uncertainty and can improve performance.

Visualizations that represent frequencies tend to be highly effective communication tools, particularly for individuals with low numeracy (i.e., difficulty working with numbers), and can help people overcome various decision-making biases.
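
A quantile dotplot is one such frequency-format display: it discretizes a predictive distribution into a small number of equally likely dots. The sketch below assumes a normal predictive distribution of, say, arrival times (echoing the transit example above); the parameters and binning are illustrative, and scipy is used for the quantile function.

```python
# A minimal quantile dotplot sketch: 20 quantiles of an assumed normal
# predictive distribution, stacked into bins so the display reads as
# "k out of 20" frequencies. Parameters and bin width are illustrative.
import numpy as np
import matplotlib.pyplot as plt
from scipy import stats

# 20 evenly spaced quantiles of the predictive distribution
q = stats.norm.ppf((np.arange(20) + 0.5) / 20, loc=12, scale=3)
bins = np.round(q).astype(int)  # bin each quantile to the nearest minute

fig, ax = plt.subplots()
counts = {}
for b in bins:
    counts[b] = counts.get(b, 0) + 1
    ax.scatter(b, counts[b], s=200, color="tab:blue")  # stack dots upward
ax.set_xlabel("predicted arrival time (min)")
ax.set_ylabel("dots (1 dot = 1/20 chance)")
plt.show()
```

A viewer can then answer "how often would I miss the bus if I arrive at minute 14?" by counting dots rather than reasoning about percentages.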

Researchers have dedicated a significant amount of work to examining which visual encodings are most appropriate for communicating uncertainty, notably in geographic information systems and cartography. One goal of these approaches is to evoke a sensation of uncertainty, for example, using fuzziness, fogginess, or blur.

Other work that examines uncertainty encodings also seeks to make looking up values more difficult when the uncertainty is high, such as value-suppressing color palettes.
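
The following sketch illustrates the value-suppressing principle in a simplified form: cell colors fade toward neutral gray as uncertainty grows, making precise look-up harder exactly where the data are least certain. This is an approximation of the idea, not the published palette-construction algorithm, and the data are synthetic.

```python
# Simplified value-suppression sketch: value -> color, then blend toward
# gray in proportion to uncertainty. Illustrates the principle only.
import numpy as np
import matplotlib.pyplot as plt
from matplotlib import cm

rng = np.random.default_rng(seed=3)
value = rng.uniform(0, 1, size=(8, 8))        # synthetic data values
uncertainty = rng.uniform(0, 1, size=(8, 8))  # synthetic uncertainty

rgba = cm.viridis(value)                      # map value to color
gray = np.array([0.5, 0.5, 0.5, 1.0])
w = uncertainty[..., None]                    # per-cell blend weight
suppressed = (1 - w) * rgba + w * gray        # fade toward neutral gray

fig, ax = plt.subplots()
ax.imshow(suppressed)
ax.set_title("value (hue) suppressed by uncertainty (grayness)")
plt.show()
```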

Given that there is no one-size-fits-all technique, in the following sections, we detail the emerging cognitive theories that describe how and why each visualization technique functions.

Uncertainty Visualization Theories

The empirical evaluation of uncertainty visualizations is challenging. Many user experience goals (e.g., memorability, engagement, and enjoyment) and performance metrics (e.g., speed, accuracy, and cognitive load) can be considered when evaluating uncertainty visualizations. Beyond identifying the metrics of evaluation, even the simplest tasks have countless configurations. As a result, it is hard for any single study to test the effects of a visualization thoroughly enough to ensure that it is appropriate in all cases, and guidelines based on a single study or a small set of studies are potentially incomplete. Theories can help bridge the gap between visualization studies by identifying and synthesizing converging evidence, with the goal of helping scientists make predictions about how a visualization will be used. Understanding foundational theoretical frameworks will empower designers to think critically about the design constraints in their work and to generate optimal solutions for their unique applications. The theories detailed in the next sections are only those with mounting support from numerous evidence-based studies in various contexts. As an overview, the table below summarizes the dominant theories in uncertainty visualization, along with the visualization techniques they propose.

[Table UV2: summary of the dominant uncertainty visualization theories and the visualization techniques each proposes]

General Discussion

There is no one-size-fits-all uncertainty visualization approach, which is why visualization designers must think carefully about each of their design choices or risk adding more confusion to an already difficult decision process. This article overviews many of the common uncertainty visualization techniques and the cognitive theories that describe how and why they function, to help designers think critically about their design choices. We focused on the uncertainty visualization methods and cognitive theories that have received the most support from converging measures (i.e., the practice of testing hypotheses in multiple ways), but there are many approaches not covered in this article that will likely prove to be exceptional visualization techniques in the future.

There is no single visualization technique we endorse, but there are some that should be critically considered before employing them. Intervals, such as error bars and the Cone of Uncertainty, can be particularly challenging for viewers. If a designer needs to show an interval, we also recommend displaying information that is more representative, such as a scatterplot, violin plot, gradient plot, ensemble plot, quantile dotplot, or HOP. Just showing an interval alone could lead people to conceptualize the data as categorical. As alluded to in the prior paragraph, combining various uncertainty visualization approaches may be a way to overcome issues with one technique or get the best of both worlds. For example, each animated draw in a hypothetical outcome plot could leave a trace that slowly builds into a static display such as a gradient plot, or animated draws could be used to help explain the creation of a static technique such as a density plot, error bar, or quantile dotplot. Media outlets such as the New York Times have presented animated dots in a simulation to show inequalities in wealth distribution due to race. More research is needed to understand if and how various uncertainty visualization techniques function together. It is possible that combining techniques is useful in some cases, but new and undocumented issues may arise when approaches are combined.
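
As a rough illustration of the hybrid idea described above, the sketch below animates draws that each leave a faint trace, so the frames gradually accumulate into a static, gradient-like view of the distribution; all parameters are illustrative assumptions.

```python
# Hybrid sketch: animated draws (as in a HOP) leave semi-transparent traces
# that accumulate; overlaps darken, approximating a density gradient.
import numpy as np
import matplotlib.pyplot as plt
from matplotlib.animation import FuncAnimation

rng = np.random.default_rng(seed=4)
draws = rng.normal(50, 10, size=60)

fig, ax = plt.subplots()
ax.set_xlim(0, 100)
ax.set_yticks([])

def update(i):
    # each frame adds one draw as a faint vertical line that persists
    ax.axvline(draws[i], color="tab:blue", alpha=0.15)

anim = FuncAnimation(fig, update, frames=len(draws), interval=200)
plt.show()
```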

In closing, we stress the importance of empirically testing each uncertainty visualization approach. As noted in numerous papers, the way that people reason with uncertainty is non-intuitive, which can be exacerbated when uncertainty information is communicated visually. Evaluating uncertainty visualizations can also be challenging, but it is necessary to ensure that people correctly interpret a display. A recent survey of uncertainty visualization evaluations offers practical guidance on how to test uncertainty visualization techniques.

Click here to access the entire article in the Handbook of Computational Statistics and Data Science

Transform Your Business With Operational Decision Automation

Decisioning Applications Bring The Value Of Operational Decisions To Light

Businesses face the imperative to transform from analog to digital due to intense competition for increasingly demanding and digitally connected customers. This imperative has ushered in a new era of decisioning applications, in which every operational decision an organization makes can be considered a business asset. These new applications inform and advance the customer experience and drive operational actions in real time through automation; they are at the forefront of the effort to streamline operations and help organizations take the right action at the right time, near-instantaneously.

Achieve Digital Goals With Automated Operational Decision Making

Automating decision life cycles allows firms to manage the fast changes required in increasingly digitized business processes. Automation of operational decisions is crucial to meeting digital goals: more than three-quarters of decision makers say it is important to their digital strategy, and close to half say it is very important.

« The share of decisions that are automated will increase markedly in two years »

The importance of automated operational decision making to digital strategy will lead to a sharp increase in automation in the near term. Today, about one-third of respondents say they have the majority of their operational decisions fully or partially automated. In two years, that group will double.

Use Cases For Automated Decisions Span The Customer Lifecycle But Current Focus Is On Early Stages

To improve the operational aspects of customer experience, and to reap the business benefits that come with delighting customers, firms align automated decision use cases to the customer lifecycle. At least some firms have expanded their share of automated operational decision making to include touchpoints across the customer lifecycle, from the discover phase all the way to the engage phase. However, the survey found that the majority have yet to implement automated decisions as fully in the later stages.

Top Challenges Will Intensify With Rapid Expansion Of New Decisioning Tools

Firms are experiencing middling success with current decision automation tools: only 22% are very satisfied with their decisioning software today. Misgivings about today's tools include an inability to integrate with current systems or platforms, high cost, and a lack of consistency across channels and processes.

The growth of real-time automation use cases and the number of technologies brought on to handle them will exacerbate existing challenges with complexity and cost.

Decision Makers Recognize High Value In Decisioning Platforms That Work In Real Time

Decision makers face significant implementation and cost challenges on their path to automated operational decisions. As a result, getting the greatest business value for the power of their automation tools is top of mind.

« Eighty-one percent of decision makers say a platform with real-time decision-to-action cycles would be valuable or very valuable to achieving digital transformation goals. »

With better, targeted decisions based on real-time analytics, companies have the potential to acquire better customers, improve the operations that serve them, and retain them longer.

Click here to access Forrester's research report

Banks sailing in uncertain waters

The apparent paradox of the decision-making process

Corporate decision-making processes are driven by seemingly opposing forces.

On the one hand, there is the human urge to have instruments at one's disposal

  • to understand the context and set a specific direction,
  • and to implement the actions required to follow the plotted course.

On the other hand, there is the exhortation to keep the mind open

  • to an array of possible future scenarios,
  • to imagine and grasp the implications of the various possible trajectories,
  • and to plot alternative courses according to the obstacles and opportunities encountered, courses that could lead to landing places other than those originally contemplated.

These needs are intertwined as never before whenever the decision-maker operates in an area such as the banking sector, which is characterised by extremely pervasive regulatory requirements concerning the

  • maintenance and use of capital,
  • liquidity management,
  • checks on lending and distribution policies,

and which is structurally exposed to the volatility of the macroeconomic context and of financial markets, greatly increasing the range of possible scenarios.

Thus, it is neither surprising nor infrequent that one of the most common questions CEOs ask the technical functions responsible for budgeting and risk planning is 'what if?' ('what would happen if…?'). The problem is that, in the last few years, the 'ifs' at hand have rapidly multiplied, as there has been an exponential increase in the controlling variables for which feedback is required:

  • Net Interest Income (NII);
  • Cost Income ratio (C/I);
  • Return On Equity (ROE);
  • Non Performing Exposure (NPE) Ratio;
  • Liquidity Coverage Ratio (LCR);
  • Expected Credit Loss (ECL);
  • Common Equity Tier 1 (CET1) ratio,

to cite but a few of the most widespread. Planning has turned into an interdisciplinary and convoluted exercise, a problem that is hard to solve for CFOs and CROs in particular (especially, of course, when they do not operate in close cooperation). A toy sketch of such a 'what if' exercise follows.
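
To make the 'what if' framing concrete, here is a deliberately toy Python sketch that recomputes a few of the controlling variables listed above under parallel interest-rate shocks. All balance-sheet figures and formulas are hypothetical, grossly simplified assumptions; a real exercise would rest on full ALM, credit-risk, and capital models.

```python
# Toy 'what if' sketch: a few controlling variables under rate scenarios.
# All figures are hypothetical; formulas are deliberately simplistic.
interest_earning_assets = 900.0   # EUR bn, hypothetical
interest_bearing_liabs = 800.0
operating_costs = 9.0
equity = 60.0
risk_weighted_assets = 450.0

for shock_bp in (0, +100, -100):  # parallel rate shocks in basis points
    asset_yield = 0.030 + shock_bp / 10_000
    funding_cost = 0.015 + 0.8 * shock_bp / 10_000  # partial pass-through
    nii = interest_earning_assets * asset_yield \
        - interest_bearing_liabs * funding_cost
    cost_income = operating_costs / nii       # toy: NII as only income
    roe = (nii - operating_costs) / equity    # toy: no tax, no loan losses
    cet1 = equity / risk_weighted_assets
    print(f"shock {shock_bp:+4d}bp: NII={nii:5.1f}  C/I={cost_income:5.1%}  "
          f"ROE={roe:5.1%}  CET1={cet1:5.1%}")
```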

This greater complexity can progressively erode the quality of banks' decision-making processes, which all too often rest on an information framework that is incomplete (whenever some controlling variables are unavailable) or even incorrect (when there is an actual lack of the information, specialist expertise and/or instruments required to model events).

A partial mitigating circumstance is that such events, besides being numerous, are interdependent in their impact on the bank's results and are particularly heterogeneous. They can in fact be exogenous (turbulence and interference along the way) or endogenous (the actions that the helmsman and the crew implement during navigation).

In the first case, the events are beyond the control of those responsible for the decision-making process, being determined by the evolution of market conditions and/or the choices of institutional actors. As such, they are often hard to predict in their likelihood of occurrence, intensity, timing, and duration. By nature, such phenomena are characterised by complex interactions, which make it crucial, albeit arduous, to comprehend the cause-effect mechanisms governing them. Lastly, their relevance is not absolute but relative, in that it depends on how reactive the bank's business model and balance-sheet structure are to the single risk factors to which the market value of the bank's assets is exposed.

Conversely, in the case of endogenous events, uncertainty is correlated more with the ability of the bank's top management

  • to quantify the level of ambition of the business actions,
  • to assess their multiple implications,
  • and, above all, with the bank's actual ability to implement them within the required time frames and terms.

The taxonomy of banking strategic planning

Although these complexities are increasingly obvious, many banks remain convinced that they can set out on their respective courses with certainty, exposing themselves to a range of risks that can restrict, or irreversibly compromise, the efficacy of their decision-making processes. The first mistake: some institutions are persuaded that the 'expert-based' approach that has always characterised their planning methodologies will continue to be sufficient and appropriate for steering the bank in the future.

History teaches us that things have not always worked out that way. These actors have yet to understand that it has become vital to evolve the planning process towards a model relying upon analytical methodologies and highly sophisticated technological instruments (risk management, econometrics, statistics, financial engineering, …), making them available to those who have always considered experience, business knowledge and familiarity with budgetary dynamics to be the privileged instruments for making decisions.

The second mistake: many banks believe uncertainty analysis to be wasteful and redundant for planning purposes since, ultimately, the allocation of objectives is (and will remain) based on assumptions and on uniquely identified scenarios. Here the risk lies in failing to understand that a broader analysis of possible scenarios in fact contributes to better delineating the assigned objectives, by separating the external conditions from the contribution provided by internal actions. Moreover, testing various hypotheses and combinations of cases makes it easier to calibrate the 'level of managerial ambition' in line with the actual potential of the organisational structure, and with the full involvement of the business functions responsible for attaining the corporate objectives.

The intersection of these two misreadings of the context results in different positionings of the bank, each with its relative risks and opportunities.

Models

ILLUMINATED

The planning process is built upon analytical data and models developed with the contribution of subject-matter experts of different backgrounds, which makes it possible to consider the impacts of a specific scenario on the bank's budget simultaneously and coherently. Not only does this improve the planning of specific items; it also provides the appropriate instruments for switching to a multi-scenario perspective, investigating the scenarios relevant for management and appraising the volatility of the expected results. This transition is extremely delicate: it entails a change in the way prospective information is produced by the technical functions and subsequently channelled to top management and the board of directors. In this context, the bank is governed through the analysis of deterministic scenarios and the statistical analysis of the probability distributions of the variables of interest. Leveraging this set of information (much more abundant and articulated than the traditional one), targets, risk-propensity levels and the related alert and tolerance thresholds are established; business owners are provided not only with the final objectives, but also with details of the key risk factors (endogenous and exogenous alike) that might represent critical or success factors, together with their respective probabilities of occurrence.
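
As a minimal illustration of that multi-scenario step, the sketch below simulates a toy distribution of a CET1-style ratio across random macro scenarios and reads alert and tolerance thresholds off its quantiles. The loss model, the chosen quantiles, and all figures are hypothetical assumptions, not an actual planning model.

```python
# Toy multi-scenario sketch: simulate a distribution of a target variable
# (a CET1-style ratio) across random macro scenarios, then derive alert
# and tolerance thresholds as tail quantiles. All inputs are hypothetical.
import numpy as np

rng = np.random.default_rng(seed=5)
n = 10_000

equity = 60.0   # EUR bn, hypothetical starting capital
rwa = 450.0
gdp_shock = rng.normal(0.0, 1.5, size=n)                 # macro driver, %
credit_losses = np.maximum(0.0, 2.0 - 1.2 * gdp_shock)   # toy loss model
cet1 = (equity - credit_losses) / rwa

target = np.median(cet1)
alert, tolerance = np.quantile(cet1, [0.10, 0.01])  # e.g. 10% / 1% tails
print(f"CET1 target {target:.1%}, alert {alert:.1%}, "
      f"tolerance {tolerance:.1%}")
```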

DELUDED

The budget planning process is characterised by the prevalence of an expert-based approach, with a limited capacity to integrate quantitative models and methodologies (not all budget items are developed with the necessary instruments and expertise), and is aimed at forecasting a single baseline scenario: the one under which the budget objectives are formalised and then articulated across organisational units and business combinations.

ENLIGHTENED

The budgetary planning process is very accurate and incorporates the (often cross-functional) specialist expertise required to understand and transmit the interactions across managerial phenomena, so as to ensure a full grasp of the bank's ongoing context. The emphasis is chiefly on the ability to explain the phenomena inside the bank, while the external baseline scenario is treated as 'given' by definition.

MISSING

The planning process attempts to consider the impact of alternative scenarios alongside the baseline scenario; however, it does so on the basis of imprecise or incomplete modelling, developed without the analytical foundations and instruments required to appraise the consistency and likelihood of those scenarios and thereby sustain such a serious exercise. The focus remains on comparing the results produced under diverse conditions, while taking into account the approximations used.

Click here to access Prometeia’s white paper