David Maister was angry. He had been surprised and annoyed to learn that his company had set up a new AI-based marketing system that was doing most of what he thought was his job as digital marketing manager at Global Consumer Brands: deciding what ads to place where, for which customer segments, and how much to spend. When he found that the system was buying ads for audiences that didn’t fit the company’s customer profile, he stormed into his boss’s office and yelled, “I don’t want men and women over 55 buying our product! It’s not our audience!” Maister demanded that the system’s vendor modify it so he could override its recommendations on how much to spend for each channel and each audience target. The vendor scrambled to give him the controls he wanted. But once Maister held the reins on budgeting and buying decisions, he saw that his choices were degrading results. Despite the company’s younger customer profile, for example, men and women over 55 were buying gifts for their children, nieces, and grandchildren, making them, in fact, a very profitable audience.
Maister returned control to the system, and results improved. Over the ensuing weeks, he began to understand what the system did well and what he could do to help it. He learned to leave decisions about where to spend and whom to target to the system, and focused instead on setting more strategic parameters, such as a campaign’s aggressiveness or a spending limit, and on testing different approaches to execution. Results continued to improve throughout 2017 as the system learned and got smarter, while Maister learned how to improve the brand’s strategy in response to the insights the AI produced. Within the first three months of using the system in new channels, the brand saw a 75% increase in purchases from paid digital channels, a 77% increase in purchase value, a 76% increase in return on ad spend, and a significant decrease in cost per acquisition.
The names in this story have been changed, but the moral is clear: If you give employees control over AI experiments, keeping them involved and letting them see what the AI does well, you can leverage the best of both humans and machines.
Unfortunately, companies will be unable to take full advantage of AI’s huge potential if employees don’t trust AI tools enough to turn their work over to them and let the machine run. The problem of low adoption is becoming more pressing as businesses of all kinds see successful applications of AI and realize it can be applied to many data-intensive processes and tasks. At the same time, AI technology, once available only at large companies like Google, Amazon, Microsoft, and IBM, is becoming less expensive and easier for smaller companies to access and operate, thanks to AI-as-a-Service.
Resistance to disruptive, technology-driven change is not unusual. Specifically, many people resist AI because of the hype surrounding it, its lack of transparency, their fear of losing control over their work, and the way it disrupts familiar work patterns.
Consider these cases where humans interfered with an AI initiative, and the reasons behind them:
Loss of control. A retailer implemented a website advertising optimization tool. The marketing team could upload a few different key banners or messages to the most prominent location on the website; after gathering some experience, the system would decide which message produced the highest visitor engagement and then offer that message to future visitors (a simple explore-and-exploit process, sketched in code after these cases). But the marketing team struggled to let the system take control, and often intervened to show a message they preferred, undercutting the value of the tool.
Disruption of plans. The CEO of a global lending institution was quickly sold on the financial benefits and operational efficiencies of introducing an AI-enabled system to take over lending decisions. But the vice president of analytics saw the new system as a diversion from his plans for his analytics teams and the company’s technology investments. He scrambled to derail consideration of the new system. He described in detail what his analysts did, and concluded, “There’s no way this system is ever going to be able to produce the kinds of results they are claiming.”
Disruption of relationships. The head of e-commerce for a regional product group at a consumer products company stuck his neck out to get permission from global headquarters to run an experiment with an AI-enabled system on some of his product’s ad campaigns. Initial tests demonstrated unprecedented results: in 2017, sales improved 15% due to the campaigns. But adoption beyond the regional group and the one product line stalled because of resistance from people with long-standing, friendly relationships with the agencies that ran the company’s ad campaigns and that stood to lose work to the machine.
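For readers curious about the mechanics behind the first case, the message selection that tool performed resembles a multi-armed bandit. The sketch below is a minimal, hypothetical illustration, not the vendor’s actual implementation: the banner names, the epsilon-greedy strategy, and the single engagement signal are all assumptions made for the example.

import random

# Minimal epsilon-greedy banner selection, for illustration only.
# Banner names and the single engagement signal are hypothetical.
class BannerSelector:
    def __init__(self, banners, epsilon=0.1):
        self.epsilon = epsilon                  # share of traffic reserved for exploring
        self.shows = {b: 0 for b in banners}    # impressions per banner
        self.clicks = {b: 0 for b in banners}   # engagements per banner

    def choose(self):
        # Explore occasionally; otherwise exploit the best-performing banner so far.
        if random.random() < self.epsilon:
            return random.choice(list(self.shows))
        return max(self.shows,
                   key=lambda b: self.clicks[b] / self.shows[b] if self.shows[b] else 0.0)

    def record(self, banner, engaged):
        self.shows[banner] += 1
        if engaged:
            self.clicks[banner] += 1

selector = BannerSelector(["spring_sale", "free_shipping", "new_arrivals"])
banner = selector.choose()              # pick a banner for the next visitor
selector.record(banner, engaged=True)   # log whether the visitor engaged

Each time a marketer manually overrides the selection, the system loses a chance to refine its engagement estimates, which is exactly how the interventions in the first case undercut the tool’s value.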
So, what can companies do to help employees become more comfortable working with AI systems?
Being able to visualize how an AI-enabled system arrives at its decisions helps develop trust in the system: it opens the black box so people can see inside. For example, Albert, a provider of an AI-based tool that helps marketers make better advertising investment decisions and improve campaign performance, developed a visualization tool (“Inside Albert”) that lets users see where and when their brand is performing best, which ad concepts are converting the most customers, who the ideal customer is in terms of gender, location, and social characteristics, and the total number of micro audience segments the system has created (often in the tens of thousands). Clients realized they couldn’t micromanage one set of variables, such as ad frequency, because the system was factoring in a vast number of variables to decide pace and timing. Though users initially felt the system ignored what they believed to be their best-performing days and frequencies, they learned it was finding high conversions outside their previously established assumptions. “Inside Albert” let marketers better understand how the system was making decisions, so they ultimately didn’t feel the need to micromanage it.
Another approach is to build political momentum for a new AI-enabled system by mobilizing stakeholders who benefit from its adoption, counterbalancing resisters, such as the VP of analytics at the lending institution, who may be unwilling to engage with the new system. For example, Waymo has partnered with Mothers Against Drunk Driving, the National Safety Council, the Foundation for Blind Children, and the Foundation for Senior Living to rally these constituencies in support of self-driving cars.
As AI is deployed more widely in your company’s decision-making processes, the goal should be to make the transition as quick and smooth as possible. As the examples of Albert and Waymo illustrate, you can overcome resistance to AI by running experiments, creating ways to visualize the system’s decision process, and engaging constituencies that would benefit from the technology. The sooner you get people on board, the sooner your company will see the results AI can produce.