Cybersecurity Risk Management Oversight – A Tool for Board Members

Companies are facing not only increasing cyber threats but also new laws and regulations for managing and reporting on data security and cybersecurity risks.

Boards of directors face an enormous challenge: to oversee how their companies manage cybersecurity risk. As boards tackle this oversight challenge, they have a valuable resource in Certified Public Accountants (CPAs) and in the public company auditing profession.

CPAs bring to bear core values—including independence, objectivity, and skepticism—as well as deep expertise in providing independent assurance services, both in the financial statement audit and across a variety of other subject matters. CPA firms have assisted companies with information security for decades. In fact, four of the 13 leading information security and cybersecurity consultancies are public accounting firms.

This tool provides questions board members charged with cybersecurity risk oversight can use as they engage in discussions about cybersecurity risks and disclosures with management and CPA firms.

The questions are grouped under four key areas:

  1. Understanding how the financial statement auditor considers cybersecurity risk
  2. Understanding the role of management and responsibilities of the financial statement auditor related to cybersecurity disclosures
  3. Understanding management’s approach to cybersecurity risk management
  4. Understanding how CPA firms can assist boards of directors in their oversight of cybersecurity risk management

This publication is not meant to provide an all-inclusive list of questions or to be seen as a checklist; rather, it provides examples of the types of questions board members may ask of management and the financial statement auditor. The dialogue that these questions spark can help clarify the financial statement auditor’s responsibility for cybersecurity risk considerations in the context of the financial statement audit and, if applicable, the audit of internal control over financial reporting (ICFR). This dialogue can be a way to help board members develop their understanding of how the company is managing its cybersecurity risks.

Additionally, this tool may help board members with cybersecurity risk oversight learn more about other incremental offerings from CPA firms. One example is the cybersecurity risk management reporting framework developed by the American Institute of CPAs (AICPA). The framework enables CPAs to examine and report on management-prepared cybersecurity information, thereby boosting the confidence that stakeholders place in a company’s initiatives.

With this voluntary, market-driven framework, companies can also communicate pertinent information regarding their cybersecurity risk management efforts and educate stakeholders about the systems, processes, and controls that are in place to detect, prevent, and respond to breaches.

AICPA

Click here to access CAQ’s detailed White Paper and Questionnaires

How to Protect and Engage Customers

Think about the many devices and channels your customers use today and the barrage of marketing messages coming across them. It’s overwhelming. How do you break through to meaningfully engage with customers, keep them loyal, and increase incremental revenue?

Finding ways to stand out from entrenched competitors and innovative upstarts is becoming increasingly difficult, and the effectiveness of traditional offerings and marketing continues to decline. At the same time, your customers and employees face a host of evolving and confusing cyber threats that can quickly derail their lives. That, no doubt, partially explains why 79 percent of consumers prefer to do business with companies that provide identity monitoring services, according to a GfK survey.

Yet the complexity of threats requires more than monitoring. Additionally, most identity and data protection service offerings haven’t kept up with the times and consumers’ expectations about self-service. At this intersection of evolving threats and customer needs lies a rare opportunity for you to establish a new type of valuable and ongoing engagement with customers.

In this article, we’ll explore this new opportunity for protecting and engaging your customers, examining:

  • Technology’s impact on customer interactions and loyalty
  • The tight correlation between security engagement and risk
  • Why it’s time for a new identity and data defense solution model
  • How a marketplace approach to identity management, privacy, and cybersecurity can help you regularly engage customers, improve loyalty, and grow revenues

Technology’s impact on customer interactions and loyalty

Today, most engagement is technology-driven, and customers expect nearly instantaneous responses to any type of query or request.


The tight correlation between security engagement and risk

It’s not just technology that has been evolving rapidly over the years. We’ve also seen a corresponding progression in the sophistication and types of identity and data fraud.


Why it’s time for a new identity and data defense solution model

We recognized the growing potential of cyber and identity protection services as a unique opportunity for necessary, ongoing engagement. That’s why we took a step back, reconsidered everything from the changing threat landscape to changing customer preferences, and began working on an innovative approach for organizations to engage customers.


Click here to access Cyberscout’s White Paper


2018 AI predictions – 8 insights to shape your business strategy

  1. AI will impact employers before it impacts employment
  2. AI will come down to earth—and get to work
  3. AI will help answer the big question about data
  4. Functional specialists, not techies, will decide the AI talent race
  5. Cyberattacks will be more powerful because of AI—but so will cyberdefense
  6. Opening AI’s black box will become a priority
  7. Nations will spar over AI
  8. Pressure for responsible AI won’t be on tech companies alone

Key implications

1) AI will impact employers before it impacts employment

As signs grow this year that the great AI jobs disruption will be a false alarm, people are likely to more readily accept AI in the workplace and society. We may hear less about robots taking our jobs, and more about robots making our jobs (and lives) easier. That in turn may lead to a faster uptake of AI than some organizations are expecting.

2) AI will come down to earth—and get to work

Leaders don’t need to adopt AI for AI’s sake. Instead, when they look for the best solution to a business need, AI will increasingly play a role. Does the organization want to automate billing, general accounting and budgeting, and many compliance functions? How about automating parts of procurement, logistics, and customer care? AI will likely be a part of the solution, whether or not users even perceive it.

3) AI will help answer the big question about data

Those enterprises that have already addressed data governance for one application will have a head start on the next initiative. They’ll be on their way to developing best practices for effectively leveraging their data resources and working across organizational boundaries. There’s no substitute for organizations getting their internal data ready to support AI and other innovations, but there is a supplement: Vendors are increasingly taking public sources of data, organizing it into data lakes, and preparing it for AI to use.

4) Functional specialists, not techies, will decide the AI talent race

Enterprises that intend to take full advantage of AI shouldn’t just bid for the most brilliant computer scientists. If they want to get AI up and running quickly, they should move to provide functional specialists with AI literacy. Larger organizations should prioritize by determining where AI is likely to disrupt operations first and start upskilling there.

5) Cyberattacks will be more powerful because of AI—but so will cyberdefense

In other parts of the enterprise, many organizations may choose to go slow on AI, but in cybersecurity there’s no holding back: Attackers will use AI, so defenders will have to use it too. If an organization’s IT department or cybersecurity provider isn’t already using AI, it has to start thinking immediately about AI’s short- and long-term security applications. Sample use cases include distributed denial of service (DDoS) pattern recognition, prioritization of log alerts for escalation and investigation, and risk-based authentication. Since even AI-wary organizations will have to use AI for cybersecurity, cyberdefense will be many enterprises’ first experience with AI. We see this fostering familiarity with AI and willingness to use it elsewhere. A further spur to AI acceptance will come from its hunger for data: The greater AI’s presence and access to data throughout an organization, the better it can defend against cyberthreats. Some organizations are already building out on-premise and cloud-based “threat lakes” that will enable AI capabilities.
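To make the risk-based authentication use case concrete, here is a minimal sketch in Python. The signal names, weights, and thresholds are invented for illustration; in practice, the weights are exactly what an AI/ML model would learn from historical login and fraud data rather than being hand-set.

```python
# Hypothetical illustration of risk-based authentication scoring.
# Signal names and weights are invented for this sketch; a production
# system would learn them from historical data (which is where AI/ML
# enters the picture).

RISK_WEIGHTS = {
    "new_device": 0.40,       # login from a device not seen before
    "geo_mismatch": 0.35,     # location inconsistent with recent logins
    "odd_hours": 0.15,        # outside the user's usual activity window
    "failed_attempts": 0.10,  # recent failed password attempts
}

def risk_score(signals: dict) -> float:
    """Combine boolean risk signals into a score between 0 and 1."""
    return sum(w for name, w in RISK_WEIGHTS.items() if signals.get(name))

def auth_action(score: float) -> str:
    """Map a risk score to an authentication decision."""
    if score < 0.3:
        return "allow"
    if score < 0.7:
        return "step_up"  # e.g., require a second factor
    return "block"

attempt = {"new_device": True, "geo_mismatch": False,
           "odd_hours": True, "failed_attempts": False}
print(auth_action(risk_score(attempt)))  # prints "step_up"
```

The same score-then-decide pattern also fits the log-alert prioritization use case: score each alert, then escalate only those above a threshold.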

6) Opening AI’s black box will become a priority

We expect organizations to face growing pressure from end users and regulators to deploy AI that is explainable, transparent, and provable. That may require vendors to share some secrets. It may also require users of deep learning and other advanced AI to deploy new techniques that can explain previously incomprehensible AI. Most AI can be made explainable—but at a cost. As with any other process, if every step must be documented and explained, the process becomes slower and may be more expensive. But opening black boxes will reduce certain risks and help establish stakeholder trust.

7) Nations will spar over AI

If China starts to produce leading AI developments, the West may respond. Whether it’s a “Sputnik moment” or a more gradual realization that they’re losing their lead, policymakers may feel pressure to change regulations and provide funding for AI. More countries are likely to issue AI strategies, with implications for companies. It wouldn’t surprise us to see Europe, which is already moving to protect individuals’ data through its General Data Protection Regulation (GDPR), issue policies to foster AI in the region.

8) Pressure for responsible AI won’t be on tech companies alone

As organizations face pressure to design, build, and deploy AI systems that deserve trust and inspire it, many will establish teams and processes to look for bias in data and models and closely monitor ways malicious actors could “trick” algorithms. Governance boards for AI may also be appropriate for many enterprises.

PwC

Click here to access PWC’s detailed predictions report