A non-negligible percentage of customers who buy a new smartphone return it within the “free return” window. Many of these returners claim that the phone does not work correctly, but the data clearly indicate that this is often not the real issue. In reality, these customers simply don’t know how to use the smartphone well enough, and either do not realize it or are not willing to admit it. So they return it, at a significant cost to both the smartphone manufacturer and the service provider. For the latter, the loss can be on the order of thousands of dollars in customer lifetime value (CLV) per customer.
Being able to foresee a return, and to develop an intervention that preempts it, can prevent these kinds of problems from occurring. The intervention might be an “expert” (like a Best Buy Geek Squad member or an Apple Genius) calling the customer a few days after a purchase, asking about the new-smartphone experience, operation, and performance, and letting the customer know that help is available 24/7 for any questions or assistance. This proactive engagement is substantively different from the salesperson mentioning at the time of purchase that help is available. Of course, it is not profitable to arrange this intervention for every new-smartphone-buying customer. Thus, a key question is with whom to implement an intervention.
Analytics to the rescue.
We helped a client dealing with exactly this problem, using a predictive model to rank-order customers over a given period of time from high to low probability of being a “returner” (without an intervention). In a real-world test, the model worked very well: the customers the model ranked in the top 10% by likelihood to return a phone accounted for about 40% of all actual returners. In other words, the group the model flagged as highest risk contained a disproportionately large share of the actual returners.
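The “top 10% captures 40% of returners” figure is a standard capture-rate (lift) calculation on a rank-ordered list. A minimal sketch of that calculation, using made-up scores and outcomes rather than the article’s actual data, might look like this:

```python
# Illustrative sketch: given model scores and actual outcomes, compute what
# share of all returners falls in the top fraction of the rank-ordered list.
# The data below is a toy example, not the client's dataset.

def capture_rate(scores, is_returner, top_fraction=0.10):
    """Fraction of all actual returners found in the top `top_fraction`
    of customers, when customers are ranked by predicted return probability."""
    ranked = sorted(zip(scores, is_returner), key=lambda pair: pair[0], reverse=True)
    cutoff = max(1, int(len(ranked) * top_fraction))
    captured = sum(flag for _, flag in ranked[:cutoff])
    total = sum(is_returner)
    return captured / total if total else 0.0

# Toy example: 10 customers, 5 actual returners; a useful model tends to
# score the returners higher than the non-returners.
scores      = [0.95, 0.90, 0.80, 0.70, 0.60, 0.40, 0.30, 0.20, 0.10, 0.05]
is_returner = [1,    1,    0,    1,    1,    0,    1,    0,    0,    0]

print(capture_rate(scores, is_returner, top_fraction=0.2))  # prints 0.4
```

Here the top 20% of the list (2 of 10 customers) contains 2 of the 5 returners, a 40% capture rate, twice what random selection would give.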
It’s easy to see how it might be cost-effective to intervene with this top 10%: if future results mirror the results based on past data, the intervention reaches 40% of the potential returners, even though we pay for an intervention with only 10% of the new-smartphone-buying customers. Exactly what top percentage of the rank-ordered list should receive the intervention, to be most profitable, can be determined from the economics of the situation. It is simply a tradeoff among the cost of providing the intervention, the benefit of converting a returner into a non-returner when the intervention succeeds, and how well the predictive model can, indeed, predict who is more likely to be a returner. The optimal percentage of customers with whom to intervene may be, for example, the top 8%, the top 13%, or even the top 30%. (In many cases of which we are aware, interventions more than pay for themselves.)
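The tradeoff described above can be made concrete with a simple scan over cutoffs: walk down the rank-ordered list, and for each candidate cutoff compare cumulative intervention cost against the expected benefit of the returns prevented so far. The cost, benefit, and success-rate figures below are illustrative assumptions, not numbers from the article:

```python
# Hedged sketch of the cutoff-choice economics. All parameter values are
# hypothetical; in practice they come from the business case and from
# testing how often the intervention actually converts a returner.

def best_cutoff(ranked_return_probs, cost_per_intervention,
                benefit_per_prevented_return, success_rate):
    """Given predicted return probabilities sorted high to low, find the
    number of customers to intervene with that maximizes expected profit."""
    best_k, best_profit = 0, 0.0
    expected_benefit = 0.0
    for k, p in enumerate(ranked_return_probs, start=1):
        # Adding this customer costs one intervention; the expected benefit
        # is their return probability times the chance the intervention works.
        expected_benefit += p * success_rate * benefit_per_prevented_return
        profit = expected_benefit - k * cost_per_intervention
        if profit > best_profit:
            best_k, best_profit = k, profit
    return best_k, best_profit

# Toy run: six customers, $20 per intervention, $200 saved per prevented
# return, and a 50% chance the intervention converts a would-be returner.
probs = [0.6, 0.5, 0.3, 0.2, 0.1, 0.05]  # model output, ranked high to low
k, profit = best_cutoff(probs, cost_per_intervention=20,
                        benefit_per_prevented_return=200,
                        success_rate=0.5)
print(k, round(profit, 2))
```

In this toy case it pays to intervene with the top three customers; below that point each additional customer’s expected benefit no longer covers the cost of the intervention.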
Briefly consider another “intervention” example. A healthcare provider considers an intervention for members who, without it, are reasonably likely to have a hospital stay in the near future. (Smartphone returns are certainly different from hospital stays; however, the intervention-related analytical process is extremely similar.) To cut down on unnecessary hospital stays, the intervention program can take many forms, such as a weekly phone call reminding members to take their medications. In this real-world illustration, “stayers” are members who will need to be hospitalized for at least 15 days within a one-year period. We used a predictive model to rank-order each member from high to low by the probability of being a stayer (without an intervention). The top 10% of the rank-ordered list captured a very high 53% of all actual stayers. Here, too, one needs to weigh the cost of the interventions against the savings from reduced hospital stays to determine with whom to intervene.
There is a side of the overall intervention decision process that does not attract as much analytical attention: optimizing which intervention works best — that is, which intervention is most cost-effective. The key reason this is not often implemented is simple: lack of data. One would have to try different interventions on different populations and record the results of those different strategies over a potentially long period of time, and usually the strategies would have to be implemented on mutually exclusive groups of people. Companies do not want to invest several years in paying for different interventions, knowing that many or most may not be sufficiently profitable. However, the same attitude prevailed “way back when” customer lifetime value (CLV) was first being discussed at many companies: the data were not available, and the prospect of collecting them for many years was viewed as too daunting. Yet nowadays nearly all large and mid-sized companies have sufficient data to develop a CLV model. One might argue that, when it comes to data for intervention decisions, short-sightedness is alive and well. Advances in machine learning, however, may help change this perspective when it comes to selecting an optimal intervention.
There are many marketing and business problems in which customers take undesirable actions that negatively impact organizations and providers. Fortunately, marketers can effectively disrupt these actions with an intervention, and optimizing whom to target, at least for a given intervention, is now being recognized as useful in many situations. We wouldn’t be surprised if “intervention analytics” becomes a well-known phrase, given the mushrooming of the field of marketing analytics and the potential upside for companies.
from HBR.org https://ift.tt/2JgKYVb