Software doesn’t always end up being the productivity panacea that it promises to be. As its victims know all too well, “death by PowerPoint,” the poor use of the presentation software, sucks the life and energy out of far too many meetings. And audit after enterprise audit reveals spreadsheets rife with errors and macro miscalculations. Email and chat facilitate similar dysfunction; inbox overload demonstrably hurts managerial performance and morale. No surprises here — this is sadly a global reality that we’re all too familiar with.
So what makes artificial intelligence/machine learning (AI/ML) champions confident that their technologies will be immune to comparably counterproductive outcomes? They shouldn’t be so sure. Digital empowerment all too frequently leads to organizational mismanagement and abuse. The enterprise history of personal productivity tools is a long, unhappy litany of unintended consequences. For too many managers, the technology’s costs often rival its benefits.
It’s precisely because machine learning and artificial intelligence platforms are supposed to be “smart” that they pose uniquely challenging organizational risks. They are likelier to inspire false and/or misplaced confidence in their findings; to amplify or further entrench data-based biases; and to reinforce — or even exacerbate — the very human flaws of the people who deploy them.
The problem is not that these innovative technologies don’t work; it’s that users will inadvertently make choices and take chances that undermine colleagues and customers. Ostensibly smarter software could perversely convert yesterday’s “death by PowerPoint” into tomorrow’s “murder by machine learning.” Nobody wants to produce boring presentations that waste everybody’s time, but they do; nobody wants to train machine learning algorithms that produce misleading predictions, but they will. The intelligent networks to counter-productivity hell are wired with good intentions.
For example, as Gideon Mann and Cathy O’Neil astutely observe in their HBR article “Hiring Algorithms Are Not Neutral,” “Man-made algorithms are fallible and may inadvertently reinforce discrimination in hiring practices. Any HR manager using such a system needs to be aware of its limitations and have a plan for dealing with them…. Algorithms are, in part, our opinions embedded in code. They reflect human biases and prejudices that lead to machine learning mistakes and misinterpretations.”
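To see how opinions get embedded in code, consider a minimal, purely illustrative sketch in Python. Everything here is synthetic and hypothetical; the point is only that a model trained on historically skewed hiring decisions learns to reproduce the skew, even though nobody explicitly programmed it to discriminate.

```python
# Illustrative only: synthetic data showing how biased labels become a biased model.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(42)
n = 5000

skill = rng.normal(size=n)              # the attribute we actually care about
group = rng.integers(0, 2, size=n)      # a protected attribute (0 or 1)

# Historical "hired" labels were driven by skill, but past screeners also
# penalized group 1. That prejudice is now baked into the training data.
hired = (skill - 0.8 * group + rng.normal(scale=0.5, size=n)) > 0

model = LogisticRegression().fit(np.column_stack([skill, group]), hired)

# Two equally skilled candidates who differ only in group membership:
candidates = np.array([[1.0, 0], [1.0, 1]])
print(model.predict_proba(candidates)[:, 1])   # the group-1 candidate scores lower
```

The model is doing exactly what it was asked to do: predict past decisions. That is precisely why an HR manager needs to understand what the training data encodes.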
These intrinsic biases — in data sets and algorithms alike — can be found wherever important data-driven decisions need to be made, such as customer segmentation efforts, product feature designs, and project risk assessments. There may even be biases in detecting biases. In other words, there’s no escaping the reality that machine learning’s computational strengths inherently coexist with human beings’ cognitive weaknesses, and vice versa. But that’s more a leadership challenge than a technical issue. The harder question is: Who’s going to “own” this digital coevolution of talent and technology, and sustainably steer it to success?
To answer this question, consider the two modes of AI/ML that are most likely to dominate enterprise initiatives:
- Active AI/ML means people directly determine the role of artificial intelligence or machine learning to get the job done. The humans are in charge; they tell the machines what to do. People rule.
- Passive AI/ML, by contrast, means the algorithms largely determine people’s parameters and processes for getting the job done. The software is in charge; the machines tell the humans what to do. Machines rule.
Crudely put, where active machine learning has people training machines, passive machine learning has machines training people. With the rise of big data and the surge of smarter software, this duality will become one of the greatest strategic opportunities — and risks — confronting leadership worldwide.
Active AI/ML systems have the potential to digitally reincarnate, and proliferate, the productivity pathologies associated with existing presentation, spreadsheet, and communications software. Individuals with relatively limited training and knowledge of their tools are being told to use them to get their jobs done. But most companies have very few reliable review mechanisms to assure or improve quality. So, despite the advanced technology, presentations continue to waste time, spreadsheet reconciliations consume weekends, and executives fall further behind responding to emails and chats.
Just as these tools turned knowledge workers into amateur presenters and financial analysts, the ongoing democratization of machine learning invites them to become amateur data scientists. But as data and smarter algorithms proliferate enterprise-wide, how sustainable will that be?
To be sure, talented power users will emerge, but overall, the inefficiencies, missed opportunities, and mistakes that could result have the potential to be organizationally staggering. To think that most managers will reap real value from AI/ML platforms with minimal training is to believe that most adults could, in their spare time, successfully turn litters of puppies into show dogs. This is delusional. Most likely, organizations will raise ill-trained software that demands inordinate amounts of attention, leaves unexpected messes, and occasionally bites.
For example, overfitting is a common machine learning mistake made by even experienced data scientists. An overfit model is, in effect, too good to be true: it has learned the noise in its training data rather than the underlying signal. It fits the existing data almost perfectly, and in turn becomes wildly inaccurate and/or unreliable when processing new data. For businesses, the predicted results could therefore be complete nonsense, leading to negative outcomes such as bad hires, poor designs, or missed sales forecasts. Overfitting, like spreadsheet errors, can of course be caught and corrected. But what happens when dozens of machine learning amateurs are making flawed investments or projections based on what they thought were accurate models? That’s an algorithm for disaster.
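A minimal sketch with made-up numbers shows the failure in miniature: a flexible tenth-degree polynomial hugs a dozen noisy training points far more tightly than a straight line does, yet it typically predicts held-out data worse.

```python
# Illustrative only: comparing a simple fit and an overfit model on synthetic data.
import numpy as np

rng = np.random.default_rng(0)

# The true relationship is a simple line, observed with noise.
x_train = np.linspace(0, 1, 12)
y_train = 2.0 * x_train + rng.normal(scale=0.2, size=x_train.size)
x_test = np.linspace(0, 1, 100)
y_test = 2.0 * x_test + rng.normal(scale=0.2, size=x_test.size)

for degree in (1, 10):
    coeffs = np.polyfit(x_train, y_train, degree)   # fit a polynomial of this degree
    train_mse = np.mean((np.polyval(coeffs, x_train) - y_train) ** 2)
    test_mse = np.mean((np.polyval(coeffs, x_test) - y_test) ** 2)
    print(f"degree {degree:2d}: train MSE {train_mse:.3f}, test MSE {test_mse:.3f}")

# The degree-10 model posts the lower training error, but it generally does
# worse on the held-out points; that gap is the signature of overfitting.
```

A trained data scientist would catch this with a held-out test set or cross-validation; an amateur admiring how well the model fits the data at hand would not.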
The more data resources that organizations possess, the more disciplined supervision and oversight that active AI/ML will need. Smarter algorithms require smarter risk management.
Passive AI/ML, on the other hand, presents a different design sensibility and poses different risks. For all intents and purposes, this software acts as manager and coach, setting goals and guidelines even as it offers data-driven advice to get the job done. The personal productivity promise is compelling: texts and emails that write their own responses; daily schedules that reprioritize themselves when you’re running late; analytics that highlight their own most important findings; and presentations that make themselves more animated. Enterprise software innovators from Microsoft to Google to Salesforce to Slack seek to smarten their software with algorithms that reliably learn from users. So, what’s the problem?
The most obvious risk, of course, is whether the “smarter software” truly gives its people the right commands. But top management should have that firmly under review. The subtler and more subversive risk is that passive AI/ML is too rooted in human compliance, adherence, and obedience. That is, workers are required to be subservient to the AI to make it succeed. This sort of disempowerment-by-design may invite employee resistance, perfunctory compliance, and subtle sabotage. For example, a customer service rep might tell an unhappy customer, “I’d love to help you, but the software forbids me from giving you any kind of refund.”
In other words, the value of the human touch is deliberately discounted by data-driven decisions. Workers are expected to subordinate their judgment to their algorithmic bosses, and the system will discipline them if they get out of line.
While there’s no complete solution to these challenges, there are approaches that strike a healthy balance between the risks and opportunities. Certainly, the more successful organizations will embrace “data governance” and hire the best data scientists they can. But culturally and operationally, they’ll need to publicly enact three interrelated initiatives to mitigate risks:
1. Write a declaration of (machine) intelligence. Not unlike Thomas Paine’s Common Sense or the Declaration of Independence, a Declaration of (Machine) Intelligence would define and articulate principles related to how the organization expects to use smart algorithms to drive performance and productivity. The document typically describes use cases and scenarios to illustrate its points. It aims to give managers and workers a clearer sense of where AI/ML will augment their tasks and where it may replace or automate them. The declaration is very much about expectations management, and it should be required reading for everyone in the company.
2. Employ radical repository transparency. Review, verification, and validation are essential principles in data-rich, AI/ML enterprise environments. Sharing ideas, data, and models between communities of practice should be a best practice. Big corporations increasingly use repositories that encourage people and teams to post their data sets and models for review. At times, these repositories grow out of data governance initiatives. At others, they’re byproducts of data science teams trying to get greater visibility into what various groups are doing digitally. The clear aspiration is to expand enterprise-wide awareness without constraining bottom-up initiative.
3. Create a trade-off road map. Data science, artificial intelligence, and machine learning are dynamically innovative fields that rapidly and opportunistically evolve. Yesterday’s active machine learning implementation may become tomorrow’s passive AI/ML business process. As legacy organizations look to data, machine learning, and digital platforms to transform themselves, their road maps will suggest where management believes active AI/ML investments will be more valuable than passive ones. For example, customer-oriented AI/ML systems may merit different talent and trade-offs than systems focused on internal process efficiency.
Churn management makes an excellent case study: At one telecom giant, an analytics team explored using machine learning techniques to identify the customers most likely to leave and switch to another service provider. Successfully testing retention offers would be a big win for the enterprise, and having ML reduce customer churn would dramatically improve internal process efficiencies. But several of the more customer-centric analysts believed that simply keeping a customer wasn’t enough; they thought a portion of possible churners could be upsold to new and additional services if the offers were framed correctly. They wanted the data and machine learning algorithms to be trained to identify customers who could be upsold, not just saved. It turned out this was a very good data-driven, customer-centric idea.
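A hypothetical sketch of that reframing, using scikit-learn, might look like the following. The file name, feature columns, and three-way outcome label are invented for illustration; the substantive change is simply that the training target distinguishes upsell candidates from plain churners rather than collapsing everything into a binary churn flag.

```python
# Illustrative only: predicting "stay" vs. "churn" vs. "upsell" instead of a binary churn flag.
# The file, columns, and label values are assumptions for the sake of the sketch.
import pandas as pd
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import train_test_split

df = pd.read_csv("customers.csv")   # assumed: one row per customer, numeric features
features = ["tenure_months", "monthly_spend", "support_calls", "intl_minutes"]

X = df[features]
y = df["outcome"]                   # assumed values: "stay", "churn", "upsell"

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=0)
model = GradientBoostingClassifier().fit(X_train, y_train)
print("holdout accuracy:", model.score(X_test, y_test))

# Rank customers by predicted upsell propensity so offers can be framed accordingly.
upsell_col = list(model.classes_).index("upsell")
df["upsell_score"] = model.predict_proba(X)[:, upsell_col]
print(df.sort_values("upsell_score", ascending=False).head())
```

The modeling machinery is unchanged; what the customer-centric analysts argued for was a different label, and therefore a different question for the algorithm to answer.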
Like the Declaration of (Machine) Intelligence, the road map of trade-offs is meant to manage expectations. But it looks to and draws on radical repository transparency to see what internal AI/ML capabilities exist and what new ones need to be cultivated or acquired.
Simply put, leaders who are serious about leading AI/ML transformations are investing not just in innovative technical expertise but also in new organizational capabilities. As they do so, they’ll need to take great care not to recreate the productivity mistakes of the past.