Uncertainty Visualization

Uncertainty is inherent to most data and can enter the analysis pipeline during the measurement, modeling, and forecasting phases. Effectively communicating uncertainty is necessary for establishing scientific transparency. Further, people commonly assume that there is uncertainty in data analysis, and they need to know the nature of the uncertainty to make informed decisions.

However, understanding even the most conventional communications of uncertainty is highly challenging for novices and experts alike, due in part to the abstract nature of probability and to ineffective communication techniques. Reasoning with uncertainty is universally difficult, but researchers are revealing how some types of visualizations can improve decision-making in a variety of contexts,

  • from hazard forecasting,
  • to healthcare communication,
  • to everyday decisions about transit.

Scholars have distinguished different types of uncertainty, including

  • aleatoric (irreducible randomness inherent in a process),
  • epistemic (uncertainty from a lack of knowledge that could theoretically be reduced given more information),
  • and ontological uncertainty (uncertainty about how accurately the modeling describes reality, which can only be described subjectively).

The term risk is also used in some decision-making fields to refer to quantified forms of aleatoric and epistemic uncertainty, whereas uncertainty is reserved for potential error or bias that remains unquantified. Here we use the term uncertainty to refer to quantified uncertainty that can be visualized, most commonly a probability distribution. This article begins with a brief overview of the common uncertainty visualization techniques and then elaborates on the cognitive theories that describe how the approaches influence judgments. The goal is to provide readers with the theoretical grounding needed to critically evaluate the various visualization techniques in the context of their own audience and design constraints. Importantly, there is no one-size-fits-all uncertainty visualization approach guaranteed to improve decisions in all domains, nor is there any guarantee that presenting uncertainty to readers will improve their judgments or trust. Therefore, visualization designers must think carefully about each of their design choices or risk adding more confusion to an already difficult decision process.

Uncertainty Visualization Design Space

There are two broad categories of uncertainty visualization techniques. The first consists of graphical annotations that can be used to show properties of a distribution, such as the mean, confidence/credible intervals, and distributional moments.

Numerous visualization techniques use the composition of marks (i.e., geometric primitives, such as dots, lines, and icons) to display uncertainty directly, as in error bars depicting confidence or credible intervals. Other approaches use marks to display uncertainty implicitly as an inherent property of the visualization. For example, hypothetical outcome plots (HOPs) present random draws from a distribution in an animated sequence, allowing viewers to form an intuitive impression of the uncertainty as they watch.
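To make this concrete, here is a minimal HOP sketch in Python (NumPy and Matplotlib), assuming a normally distributed estimate with illustrative parameters; each animation frame shows one plausible outcome, and the viewer's sense of the spread emerges from watching many frames.

    # Minimal HOP sketch: animate random draws from an assumed normal estimate.
    import numpy as np
    import matplotlib.pyplot as plt
    from matplotlib.animation import FuncAnimation

    rng = np.random.default_rng(1)
    draws = rng.normal(loc=50, scale=10, size=30)  # 30 hypothetical outcomes (illustrative)

    fig, ax = plt.subplots(figsize=(5, 2))
    ax.set_xlim(0, 100)
    ax.set_yticks([])
    line = ax.axvline(draws[0], color="steelblue", linewidth=3)

    def show_draw(i):
        # Each frame presents one plausible outcome drawn from the distribution.
        line.set_xdata([draws[i], draws[i]])
        return (line,)

    anim = FuncAnimation(fig, show_draw, frames=len(draws), interval=400)
    plt.show()

In practice the number of draws, frame rate, and sampling strategy would be tuned to the data and audience at hand.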

The second category of techniques focuses on mapping probability or confidence to a visual encoding channel. Visual encoding channels define the appearance of marks using controls such as color, position, and transparency. Techniques that use encoding channels have the added benefit of adjusting a mark that is already in use, such as making a mark more transparent if the uncertainty is high. Marks and encodings that both communicate uncertainty can be combined to create hybrid approaches, such as in contour box plots and probability density and interval plots.
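For instance, the sketch below (Python with NumPy and Matplotlib; the group means and standard errors are made up for illustration) maps higher uncertainty to greater transparency, so that less certain estimates literally fade relative to more certain ones.

    # Sketch: encode uncertainty in a channel already carried by the mark (transparency).
    import numpy as np
    import matplotlib.pyplot as plt

    means = np.array([3.1, 4.6, 2.8, 5.2])    # illustrative point estimates
    stderrs = np.array([0.2, 1.1, 0.5, 1.8])  # illustrative standard errors

    # Rescale uncertainty to opacity in [0.2, 1.0]; most certain -> most opaque.
    alphas = 1.0 - 0.8 * (stderrs - stderrs.min()) / (stderrs.max() - stderrs.min())

    fig, ax = plt.subplots()
    for x, (m, a) in enumerate(zip(means, alphas)):
        ax.scatter(x, m, s=150, color="darkorange", alpha=float(a))
    ax.set_xticks(range(len(means)))
    ax.set_xticklabels([f"group {i + 1}" for i in range(len(means))])
    ax.set_ylabel("estimate")
    plt.show()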

More expressive visualizations provide a fuller picture of the data by depicting more properties, such as the nature of the distribution and outliers, which can be lost with intervals. Other work proposes that showing distributional information in a frequency format (e.g., 1 out of 10 rather than 10%) more naturally matches how people think about uncertainty and can improve performance.

Visualizations that represent frequencies tend to be highly effective communication tools, particularly for individuals with low numeracy (i.e., difficulty working with numbers), and can help people overcome various decision-making biases.
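A quantile dotplot is one way to put this frequency framing into practice. The sketch below (Python with NumPy, SciPy, and Matplotlib, using an assumed normal predictive distribution for an arrival time) renders the distribution as 20 dots, each standing for a 1-in-20 chance.

    # Quantile dotplot sketch: 20 dots, each representing a 5% (1-in-20) chance.
    import numpy as np
    import matplotlib.pyplot as plt
    from scipy import stats

    n_dots = 20
    # Evenly spaced quantiles of an assumed normal(12, 3) predictive distribution.
    q = stats.norm.ppf((np.arange(n_dots) + 0.5) / n_dots, loc=12, scale=3)

    # Bin the quantiles and stack dots so the display reads as "x out of 20 outcomes".
    bins = np.round(q).astype(int)
    fig, ax = plt.subplots()
    for b in np.unique(bins):
        count = int((bins == b).sum())
        ax.scatter([b] * count, np.arange(count) + 0.5, s=200, color="slateblue")
    ax.set_xlabel("predicted arrival time (minutes)")
    ax.set_yticks([])
    plt.show()

Counting dots beyond a threshold yields a statement of the form "x out of 20 chances," which is the frequency framing discussed above.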

Researchers have dedicated a significant amount of work to examining which visual encodings are most appropriate for communicating uncertainty, notably in geographic information systems and cartography. One goal of these approaches is to evoke a sensation of uncertainty, for example, using fuzziness, fogginess, or blur.

Other work on uncertainty encodings deliberately makes looking up values more difficult when the uncertainty is high, as in value-suppressing color palettes.
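The sketch below gives the flavor of that idea (Python with NumPy and Matplotlib; the data are random and the simple linear blend toward gray is an illustration, not the published value-suppressing palette construction): as a cell's uncertainty grows, its color is pulled toward a neutral gray, making its value harder to read off precisely.

    # Sketch of value suppression: blend each cell's color toward gray as uncertainty rises.
    import numpy as np
    import matplotlib.pyplot as plt
    from matplotlib import cm

    values = np.random.default_rng(0).uniform(0, 1, size=(8, 8))       # illustrative data
    uncertainty = np.random.default_rng(1).uniform(0, 1, size=(8, 8))  # illustrative uncertainty

    base = cm.viridis(values)[..., :3]      # RGB color driven by the value
    gray = np.full_like(base, 0.6)          # neutral gray target
    w = uncertainty[..., None]              # per-cell blend weight
    suppressed = (1 - w) * base + w * gray  # more uncertain -> grayer, harder to decode

    fig, ax = plt.subplots()
    ax.imshow(suppressed)
    ax.set_title("value-suppressed heatmap (sketch)")
    plt.show()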

Given that there is no one-size-fits-all technique, in the following sections, we detail the emerging cognitive theories that describe how and why each visualization technique functions.


Uncertainty Visualization Theories

The empirical evaluation of uncertainty visualizations is challenging. Many user experience goals (e.g., memorability, engagement, and enjoyment) and performance metrics (e.g., speed, accuracy, and cognitive load) can be considered when evaluating uncertainty visualizations. Beyond identifying the metrics of evaluation, even the simplest tasks have countless configurations. As a result, it is hard for any single study to sufficiently test the effects of a visualization to ensure that it is appropriate to use in all cases. Visualization guidelines based on a single study or a small set of studies are potentially incomplete. Theories can help bridge the gap between visualization studies by identifying and synthesizing converging evidence, with the goal of helping scientists make predictions about how a visualization will be used. Understanding foundational theoretical frameworks will empower designers to think critically about the design constraints in their work and generate optimal solutions for their unique applications. The theories detailed in the next sections are only those that have mounting support from numerous evidence-based studies in various contexts. As an overview, the table below provides a summary of the dominant theories in uncertainty visualization, along with proposed visualization techniques.

[Table: summary of the dominant theories in uncertainty visualization and associated visualization techniques]

General Discussion

There are no one-size-fits-all uncertainty visualization approaches, which is why visualization designers must think carefully about each of their design choices or risk adding more confusion to an already difficult decision process. This article overviews many of the common uncertainty visualization techniques and the cognitive theory that describes how and why they function, to help designers think critically about their design choices. We focused on the uncertainty visualization methods and cognitive theories that have received the most support from converging measures (e.g., the practice of testing hypotheses in multiple ways), but there are many approaches not covered in this article that will likely prove to be exceptional visualization techniques in the future.

There is no single visualization technique we endorse, but there are some that should be critically considered before employing them. Intervals, such as error bars and the Cone of Uncertainty, can be particularly challenging for viewers. If a designer needs to show an interval, we also recommend displaying information that is more representative, such as a scatterplot, violin plot, gradient plot, ensemble plot, quantile dotplot, or HOP. Just showing an interval alone could lead people to conceptualize the data as categorical. As alluded to in the prior paragraph, combining various uncertainty visualization approaches may be a way to overcome issues with one technique or get the best of both worlds. For example, each animated draw in a hypothetical outcome plot could leave a trace that slowly builds into a static display such as a gradient plot, or animated draws could be used to help explain the creation of a static technique such as a density plot, error bar, or quantile dotplot. Media outlets such as the New York Times have presented animated dots in a simulation to show inequalities in wealth distribution due to race. More research is needed to understand if and how various uncertainty visualization techniques function together. It is possible that combining techniques is useful in some cases, but new and undocumented issues may arise when approaches are combined.
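As one hypothetical way to realize the "draws that leave a trace" idea above, the sketch below (Python with NumPy and Matplotlib; standard normal draws with illustrative parameters) lets each animated draw deposit a faint line, so overlapping traces gradually darken into a gradient-like static summary.

    # Sketch: animated draws accumulate into a static, gradient-like display.
    import numpy as np
    import matplotlib.pyplot as plt
    from matplotlib.animation import FuncAnimation

    rng = np.random.default_rng(7)
    draws = rng.normal(loc=0.0, scale=1.0, size=100)  # illustrative draws

    fig, ax = plt.subplots(figsize=(5, 2))
    ax.set_xlim(-4, 4)
    ax.set_yticks([])

    def add_trace(i):
        # Each draw leaves a faint permanent line; common outcomes darken fastest.
        ax.axvline(draws[i], color="black", alpha=0.08, linewidth=4)

    anim = FuncAnimation(fig, add_trace, frames=len(draws), interval=100, repeat=False)
    plt.show()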

In closing, we stress the importance of empirically testing each uncertainty visualization approach. As noted in numerous papers, the way that people reason with uncertainty is non-intuitive, which can be exacerbated when uncertainty information is communicated visually. Evaluating uncertainty visualizations can also be challenging, but it is necessary to ensure that people correctly interpret a display. A recent survey of uncertainty visualization evaluations offers practical guidance on how to test uncertainty visualization techniques.

Click here to access the entire article in the Handbook of Computational Statistics and Data Science

The future of compliance – How cognitive computing is transforming the banking industry

Paradigm shift in financial services regulatory compliance

The compliance landscape has changed rapidly and dramatically over the past 15 years, with the volume and complexity of new regulations rising unabated. Financial institutions have strained to keep pace with the onslaught of legislative and regulatory changes that arose in response to improper business practices and criminal activity that eroded public confidence in global credit and financial markets and in the security of the banking system.

After the financial crisis of 2008, there was a sharp increase in enforcement actions brought by federal and state regulators in a broad range of cases involving financial and securities fraud, economic sanctions violations, money laundering, bribery, corruption, market manipulation, and tax evasion, including violations of the Bank Secrecy Act and OFAC sanctions. According to Forbes, aggregate fines paid by the largest global banks from 2008 through August 2014 exceeded USD 250 billion. A February 2016 report issued by Bloomberg revealed that the toll on foreign banks since the 2008 crisis has been colossal: 100,000 jobs lost, USD 63 billion in fines and penalties, and a staggering USD 420 billion loss in market capitalization.

In the wake of these enforcement actions and record-breaking penalties, financial institutions are under pressure to

  • rethink,
  • restructure,
  • and retool

their risk and compliance function to operate in the current environment. With regulators, investors and boards demanding increased global transparency, risk and compliance can no longer be tackled in geographical silos. Transforming the way compliance departments operate to meet the new reality requires an investment in talent and technology.

Spending on talent continues to rise as institutions hire more and more staff to shore up already sizeable compliance teams. At the end of 2014, Citigroup reported a compliance staff of 30,000. Some boards, analysts, and investors question the exploding costs of compliance yet recognize that any effort to reduce staff without demonstrable and measurable improvements in compliance processes and technology would almost certainly be viewed negatively by regulators. Headcount alone cannot solve today’s compliance challenges. One possible solution lies in transformative technology that enables a shift in the focus of compliance staff from information gathering to information analysis. In other words, it is time for a paradigm shift in the financial services industry and the way regulatory compliance departments operate.

Cognitive computing for compliance

Cognitive systems are trained by humans and learn as they ingest and interpret new information. Rather than being explicitly programmed, they learn and reason from their interactions with us and from their experiences with their environment. IBM® Watson® technology represents a new era in computing called cognitive computing, where systems understand the world in a way more similar to humans: through

  • senses,
  • learning
  • and experience.

Watson

  • uses natural language processing to analyze structured and unstructured data and to understand grammar and context,
  • understands complex questions,
  • and proposes evidence-based answers, weighing the supporting evidence and the quality of the information found.

Cognitive computing is a natural fit for the regulatory compliance space because it can be used to accomplish the significant amount of analysis required to read and interpret regulations. The traditional process of distilling regulations into distinct requirements is a demanding and continuous undertaking. Compliance professionals must read hundreds of regulatory documents and determine which of the thousands of lines of text constitute true requirements. Given the same document to assess, different staff can arrive at different conclusions. In a manual environment, this adds another layer of issues to track while the parties resolve whether the identified text is or is not a requirement.

This work is usually performed on a continuous cycle and under the pressure of deadlines. The end-to-end process of identifying and finalizing the requirements inventory can be demanding and tedious. It is also traditionally encumbered by the heavy use of spreadsheets for tracking of regulations, requirements, internal decisions and statuses. Together, these conditions have the potential to negatively impact the work environment and can result in low morale and high turnover. Only when the human effort can shift from the tedium of manual processes (collect regulations, identify requirements, and track compliance issues through spreadsheets) to an automated solution will end-to-end visibility and transparency be realized. Cognitive computing technology can help an institution realign its approach from outdated information processing techniques to a state-of-the-art solution that enables this transformation.
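To illustrate the kind of text triage described above (a toy heuristic only, not how IBM's cognitive tooling actually works), the hypothetical Python sketch below flags sentences containing obligation language such as "shall" or "must" as candidate requirements for human review; real regulatory parsing is far more nuanced.

    # Toy heuristic: surface sentences with obligation language as candidate requirements.
    # A simplistic keyword filter, meant only to illustrate the manual triage described above.
    import re

    OBLIGATION_TERMS = re.compile(r"\b(shall|must|is required to|may not)\b", re.IGNORECASE)

    def candidate_requirements(document_text: str) -> list[str]:
        """Return sentences containing obligation language, for human review."""
        sentences = re.split(r"(?<=[.;])\s+", document_text)
        return [s.strip() for s in sentences if OBLIGATION_TERMS.search(s)]

    sample = (
        "This part describes recordkeeping duties. A covered institution must "
        "retain transaction records for five years. Guidance documents may be "
        "consulted for examples."
    )
    for sentence in candidate_requirements(sample):
        print("REVIEW:", sentence)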

IBM Watson Regulatory Compliance puts the power of cognitive computing into the hands of compliance professionals, giving them the capabilities needed to manage risk and compliance requirements and to optimize data for more effective analysis. It is specifically tailored for compliance departments and offers, or in the future may offer, core functionalities that include:

  • Document ingestion
  • Requirements parsing and identification
  • Requirements decisioning and management
  • Categorization of requirements
  • Mapping of controls to requirements
  • Harmonization of risk frameworks
  • Interactive reporting and analytics
  • Automated audit trail
  • Automated requirements catalog
  • Centralized document library

Watson Regulatory Compliance is designed to help organizations use cognitive technology to transform key portions of their regulatory compliance processes that are traditionally performed manually.


These capabilities, enabled by Watson, can potentially help an organization reallocate resources to more value-added compliance and analytic activities, improving transparency across the compliance function.

A conceptual end-to-end approach for cognitive compliance, from requirements management to categorization, mapping of controls and standards, and analytics and reporting, is presented in the following figure.

[Figure: conceptual end-to-end approach for cognitive compliance]

Click here to access IBM’s White Paper

 

RPA – A programmatic approach to intelligent automation to scale growth, manage risk, and drive enterprise value

Business leaders and chief information officers around the world are jumping on the robotic process automation (RPA) pilot bandwagon to start their companies on the automation journey. Some RPA pilots are evaluating software designed to stitch together known technology concepts, such as screen scraping and macro-based automation, through user-friendly tools to take process automation to the next level. Other pilots are venturing into the use of machine learning and cognitive automation to unleash new business insights.

These pilots, or proof-of-concept programs, help leaders set a foundation for their understanding of RPA while introducing new ideas for how automation can help scale operations or define new business strategies. Now that a pilot has succeeded and leaders are seeing the possibilities, what happens next?

When performing RPA pilots, many companies get stuck at basic automation and stop there. Other companies run basic and cognitive automation pilots simultaneously.

Aligning the goals of basic RPA with cognitive computing and artificial intelligence can seem improbable. But are the objectives really that different? Leaders want to use all levels of automation to

  • drive business growth,
  • manage risk,
  • and increase value.

The trick is having a strategy for getting from pilot to program, and putting in place a comprehensive structure looking beyond the RPA pilots to intelligent automation (IA) as an across-the-board investment. This ensures IA ventures become more than speculation and remain significant to the business.

  • But how can leaders ensure that IA is more than a one-time cost play?
  • How are future automation opportunities identified and evaluated for both risk and benefit?
  • How is “electronic employee” service performance monitored?
  • How do leaders ensure the optimal mix of basic, enhanced, and cognitive automation?
  • How is business continuity maintained if the IA solution fails?
  • How are system security, change management, system processing, and authentication controls maintained as automation risk becomes more complex?
  • How will IA be used to transform the business?

Leaders know technology is changing rapidly, and IA is a moving target. Implementing a “bullet-proof” value-based program is critical to managing the automation revolution and ensuring it delivers positive business impacts over time. Robust program management balances risk and reward with structures driving sustainable IA value. An IA program model delivers these ideals.

An Intelligent Automation program can help enhance and expedite the implementation of IA throughout an organization. Here are four critical characteristics for success:

  1. It is strategically positioned – Positioning IA on par with other business strategies, as integral to enterprise objectives, is the best place to start. As with outsourcing (OS), dependent IA vendor relationships are treated as strategic. Global process owners (GPOs) use IA to transform end-to-end services. Global teams engage in IA opportunity evaluation to ensure that bad processes are not automated.
  2. It uses a “center of excellence” service model – Establishing a center of excellence (CoE) demonstrates a commitment to IA success. Focus drives effectiveness, and CoEs drive transparency to IA results. CoEs have varied formats (virtual, centralized, regional, etc.) and engage cross-functional teams. CoE governance guides IA strategy and validates results. Clarifying decision rights balances governance and operations accountabilities. Incorporating IA support roles (e.g., HR, IT Security, Internal Audit, risk) in decision-making ensures change integration is well managed.
  3. It has a robust delivery framework – Integrating technologies, toolkits, and tactics into IA program execution safeguards sustainability. Including relevant designers, IT professionals, and operations teams in testing makes sure solutions work. Socializing and managing life cycle compliance (e.g., intake, approvals, testing) ensures team interaction is clear. Program management, repository, and workflow tools make oversight effective.
  4. It incorporates a proactive risk management structure – Making IT risk and security control oversight a part of IA development ensures solutions are sound. Like any technology integration, change control is critical to implementation success. An IT security risk and control framework provides this support. Risk mitigation strategies linking security reviews to IA validation ensure that business goals and technology risks are appropriately considered.


Click here to access KPMG’s detailed RPA report