- AI will impact employers before it impacts employment
- AI will come down to earth—and get to work
- AI will help answer the big question about data
- Functional specialists, not techies, will decide the AI talent race
- Cyberattacks will be more powerful because of AI—but so will cyberdefense
- Opening AI’s black box will become a priority
- Nations will spar over AI
- Pressure for responsible AI won’t be on tech companies alone
1) AI will impact employers before it impacts employment
As signs grow this year that the great AI jobs disruption will be a false alarm, people are likely to more readily accept AI in the workplace and society. We may hear less about robots taking our jobs, and more about robots making our jobs (and lives) easier. That in turn may lead to a faster uptake of AI than some organizations are expecting.
2) AI will come down to earth—and get to work
Leaders don’t need to adopt AI for AI’s sake. Instead, when they look for the best solution to a business need, AI will increasingly play a role. Does the organization want to automate billing, general accounting and budgeting, and many compliance functions? How about automating parts of procurement, logistics, and customer care? AI will likely be a part of the solution, whether or not users even perceive it.
3) AI will help answer the big question about data
Those enterprises that have already addressed data governance for one application will have a head start on the next initiative. They’ll be on their way to developing best practices for effectively leveraging their data resources and working across organizational boundaries. There’s no substitute for organizations getting their internal data ready to support AI and other innovations, but there is a supplement: Vendors are increasingly taking public sources of data, organizing it into data lakes, and preparing it for AI to use.
4) Functional specialists, not techies, will decide the AI talent race
Enterprises that intend to take full advantage of AI shouldn’t just bid for the most brilliant computer scientists. If they want to get AI up and running quickly, they should move to provide functional specialists with AI literacy. Larger organizations should prioritize by determining where AI is likely to disrupt operations first and start upskilling there.
5) Cyberattacks will be more powerful because of AI—but so will cyberdefense
In other parts of the enterprise, many organizations may choose to go slow on AI, but in cybersecurity there’s no holding back: Attackers will use AI, so defenders will have to use it too. If an organization’s IT department or cybersecurity provider isn’t already using AI, it has to start thinking immediately about AI’s short- and long-term security applications. Sample use cases include distributed denial of service (DDoS) pattern recognition, prioritization of log alerts for escalation and investigation, and risk-based authentication. Since even AI-wary organizations will have to use AI for cybersecurity, cyberdefense will be many enterprises’ first experience with AI. We see this fostering familiarity with AI and willingness to use it elsewhere. A further spur to AI acceptance will come from its hunger for data: The greater AI’s presence and access to data throughout an organization, the better it can defend against cyberthreats. Some organizations are already building out on-premises and cloud-based “threat lakes” that will enable AI capabilities.
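To make one of these use cases concrete, here is a minimal sketch of log-alert prioritization: alerts are scored on severity, asset criticality, and repetition, then sorted so the riskiest are escalated first. The field names, weights, and data are illustrative assumptions, not any vendor's product or API.

```python
# Toy alert-prioritization sketch. Weights and fields are illustrative;
# a production system would learn these from historical analyst triage.
SEVERITY_WEIGHT = {"low": 1, "medium": 3, "high": 7, "critical": 10}

def risk_score(alert):
    """Combine alert severity, asset criticality, and repeat count."""
    base = SEVERITY_WEIGHT.get(alert["severity"], 1)
    return base * alert["asset_criticality"] + 2 * alert["repeat_count"]

def prioritize(alerts):
    """Return alerts sorted from highest to lowest risk score."""
    return sorted(alerts, key=risk_score, reverse=True)

alerts = [
    {"id": 1, "severity": "low", "asset_criticality": 5, "repeat_count": 0},
    {"id": 2, "severity": "critical", "asset_criticality": 2, "repeat_count": 1},
    {"id": 3, "severity": "medium", "asset_criticality": 4, "repeat_count": 6},
]
ranked = prioritize(alerts)  # a noisy medium alert can outrank a one-off critical
```

Even this crude scoring shows the value: a repeated medium-severity alert on an important asset can surface above a single critical alert on a minor one, which is the kind of context a flat severity queue misses.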
6) Opening AI’s black box will become a priority
We expect organizations to face growing pressure from end users and regulators to deploy AI that is explainable, transparent, and provable. That may require vendors to share some secrets. It may also require users of deep learning and other advanced AI to deploy new techniques that can explain previously incomprehensible AI. Most AI can be made explainable—but at a cost. As with any other process, if every step must be documented and explained, the process becomes slower and may be more expensive. But opening black boxes will reduce certain risks and help establish stakeholder trust.
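One simple family of such explanation techniques is perturbation-based attribution: treat the model as a black box, vary one input at a time, and measure how the prediction shifts. Below is a minimal sketch against a toy scoring model; the feature names and weights are hypothetical stand-ins for a model whose internals are not visible.

```python
# Minimal perturbation-based attribution sketch: for each feature,
# reset it to a baseline (0) and record how much the prediction moves.
# predict() is a stand-in black box; in practice it could be any
# trained model exposed only through its prediction interface.

def predict(features):
    # Hypothetical hidden model: an undisclosed weighted sum.
    weights = {"income": 0.5, "age": 0.1, "debt": -0.8}
    return sum(weights[name] * value for name, value in features.items())

def attributions(features):
    """Return each feature's contribution: prediction change when removed."""
    base = predict(features)
    result = {}
    for name in features:
        perturbed = dict(features, **{name: 0})
        result[name] = base - predict(perturbed)
    return result

scores = attributions({"income": 100, "age": 40, "debt": 30})
```

The cost mentioned above shows up directly: explaining one prediction for n features takes n + 1 model calls here, and more faithful techniques cost more still.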
7) Nations will spar over AI
If China starts to produce leading AI developments, the West may respond. Whether it’s a “Sputnik moment” or a more gradual realization that they’re losing their lead, policymakers may feel pressure to change regulations and provide funding for AI. Expect more countries to issue national AI strategies, with implications for companies. It wouldn’t surprise us to see Europe, which is already moving to protect individuals’ data through its General Data Protection Regulation (GDPR), issue policies to foster AI in the region.
8) Pressure for responsible AI won’t be on tech companies alone
As organizations face pressure to design, build, and deploy AI systems that deserve trust and inspire it, many will establish teams and processes to look for bias in data and models and closely monitor ways malicious actors could “trick” algorithms. Governance boards for AI may also be appropriate for many enterprises.
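A first step such teams often take is a simple statistical check on model outputs across groups. One common metric is the demographic parity difference: the gap in positive-outcome rates between two groups. The sketch below uses made-up data and is only a starting point, not a complete fairness audit.

```python
# Toy bias check: compare a model's positive-outcome rate across two
# groups (demographic parity difference). Data are illustrative only;
# a real audit would also examine error rates, base rates, and context.

def positive_rate(outcomes):
    """Fraction of binary outcomes that are positive (1)."""
    return sum(outcomes) / len(outcomes)

def parity_gap(outcomes_a, outcomes_b):
    """Absolute difference in positive rates between two groups."""
    return abs(positive_rate(outcomes_a) - positive_rate(outcomes_b))

group_a = [1, 1, 0, 1, 0, 1, 1, 0]  # 5 of 8 approved
group_b = [1, 0, 0, 0, 1, 0, 0, 0]  # 2 of 8 approved
gap = parity_gap(group_a, group_b)  # large gaps get flagged for review
```

A governance board would set the threshold at which a gap triggers investigation, since a nonzero gap is not by itself proof of unfair treatment.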