Which product is the most relevant to offer your customer right now?

This is a central and recurring question in sales and marketing. Recommendation engines are often used to find the answer, building on historical sales data for thousands of products, and they are known to perform well in retail settings. But what do you do if you don’t have that many products? And shouldn’t you be able to take advantage of customer data beyond sales, such as the customer’s dialogue with your business, product usage, payment information and demographics?

In this article we describe an alternative approach to the Next Best Offer (NBO), tailored to businesses with a much smaller number of products, such as banking, insurance and telecom. The core idea is to mass-produce calibrated prediction models that estimate each customer’s probability of purchasing each individual product. The method is based on projects delivered at several large Norwegian companies.

Digital sales require knowing the Next Best Offer

In the pre-digital era, sales personnel normally served a local group of customers tied to the geographic location of the store. This allowed for tailored product offerings based on an intimate understanding of the shop’s typical customer. For businesses such as banks and insurance companies, sales representatives may also have had a long-term relationship with the customer, enabling even more customized advice. This close relationship has been challenged as the digital era has brought centralization of sales personnel and “one size fits all” approaches. Despite this trend, customers still expect a high service level and to be recommended relevant offerings.

One important tool for resolving the tension between the customer’s high expectations and a centralized digital sales organization is the recommendation engine. These analytical models estimate which product or service has the highest chance of being relevant for the customer — the Next Best Offer (NBO). Everyone who has shopped online or used streaming services such as Netflix is familiar with the suggestions at check-out that “people who bought this also bought that” or “since you watched this movie you might also like that movie”; these suggestions come from such models.

In addition, there is often only a limited number of interactions with a customer, and these interactions usually happen when the customer is mentally in purchasing mode. In many businesses, rule-based recommendations are already in place to take advantage of these interactions — it can be as simple as offering a credit card to every customer who doesn’t already have one each time they log into the online or mobile bank, or offering car insurance through a partner when selling a car. Moving beyond simple business rules to analytical NBO models can both increase the probability of using these interactions to up- or cross-sell and reduce the chance that the customer feels exposed to overly aggressive marketing with irrelevant products.

Analytical models for few products, but lots of data

The most frequently used analytical approach, so-called “collaborative filtering”, takes advantage of having both a large number of products and a large number of customers. These methods predict which products a customer will be most interested in based on the preferences of other customers who bought or liked similar products, and/or the customer’s previous content preferences (e.g. product types). Such models only require data on which customers bought which products and are relatively simple to train and operationalize. They form the basis for the recommendation engines used by, for example, Amazon. However, they require very large amounts of customer/product purchase data to produce good results (think millions of customer interactions and products), and they do not utilize contextual data about the customer (demographics, how the products are used, payment history, contact with the customer center etc.).
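To make the mechanism concrete, here is a minimal sketch of item-based collaborative filtering on a toy purchase matrix. The data, the cosine-similarity scoring and the tiny scale are illustrative assumptions only; real systems work on much larger matrices with more refined techniques.

```python
# Minimal illustration of item-based collaborative filtering on a toy
# binary purchase matrix (rows = customers, columns = products).
import numpy as np

# 4 customers x 3 products; 1 = purchased. Toy data for illustration only.
purchases = np.array([
    [1, 1, 0],
    [1, 1, 1],
    [0, 1, 1],
    [1, 0, 0],
])

# Cosine similarity between product columns.
norms = np.linalg.norm(purchases, axis=0)
similarity = (purchases.T @ purchases) / np.outer(norms, norms)
np.fill_diagonal(similarity, 0.0)

# Score unpurchased products for customer 3 by similarity to what they already own.
customer = purchases[3]
scores = similarity @ customer
scores[customer == 1] = -np.inf          # do not recommend what they already have
print("Recommended product index:", int(np.argmax(scores)))
```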

In contrast, many industries bear little resemblance to the mass-retail cases where collaborative filtering shines. Examples are banking, insurance and telecom, which usually have a product portfolio of somewhere between ten and a hundred offerings, but rich datasets about their customers. These industries require a different analytical approach to predicting the next best offer. A more modern and technically demanding approach is “reinforcement learning”, which tests different offers and learns from the response (interaction/purchase), optimizing what is displayed individually for each customer. In contrast to collaborative filtering, this approach can quickly make predictions for new customers and products by learning on the fly, and it can take advantage of contextual data about the customer. However, it is substantially more difficult to train and operationalize.
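As a rough illustration of the explore-and-learn idea behind such systems, the sketch below runs a simple epsilon-greedy bandit loop. Real reinforcement-learning setups are typically contextual and considerably more involved; the offers, the simulated response and the epsilon value here are hypothetical.

```python
# Minimal epsilon-greedy bandit loop: pick an offer, observe a response,
# update the running conversion estimate for that offer.
import random

offers = ["credit_card", "car_insurance", "savings_account"]   # hypothetical products
counts = {o: 0 for o in offers}
values = {o: 0.0 for o in offers}        # running estimate of conversion rate per offer
epsilon = 0.1                            # fraction of traffic used for exploration

def choose_offer():
    if random.random() < epsilon:
        return random.choice(offers)                 # explore
    return max(offers, key=lambda o: values[o])      # exploit the best estimate so far

def update(offer, converted):
    counts[offer] += 1
    values[offer] += (converted - values[offer]) / counts[offer]  # incremental mean

# In production, `converted` would be the observed customer response.
for _ in range(1000):
    offer = choose_offer()
    converted = 1 if random.random() < 0.05 else 0   # simulated response
    update(offer, converted)
```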

Instead of these two approaches, we suggest an intermediate method balancing prediction accuracy and complexity, which we call the “prediction model factory”. Here we use supervised classification models and model score calibration techniques to predict each customer’s probability of purchasing each individual product. The method is based on projects delivered at several Norwegian companies, taking advantage of existing infrastructure for developing prediction models for single products.

The prediction model factory

The figure below illustrates the general supervised learning process, where machine learning algorithms predict the probability of a customer’s action by learning patterns in historical data consisting of explanatory variables and known outcomes. In a nutshell, with the “prediction model factory” approach, you train an individual prediction model for each of the products and services you want to recommend to your customers. The resulting model scores are then calibrated across products to give comparable probabilities that can be used to rank the products or services for each customer (for the technically inclined reader, see here for example). To get a first implementation up and running, each model uses the same classification algorithm (e.g. gradient boosted trees) trained on a standardized dataset of explanatory variables (such as customer demographics, interaction with products and customer service, product purchases etc.), differing only in the target variable representing each product. This allows for simple mass production of individual models, hence the name “prediction model factory”.
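The following sketch illustrates the pattern using scikit-learn, assuming a standardized feature table and one binary target column per product. The file names, column layout and the isotonic calibration choice are hypothetical examples rather than a prescribed implementation.

```python
# Sketch of a "prediction model factory": one calibrated classifier per product,
# all trained on the same standardized feature table. Names are placeholders.
import pandas as pd
from sklearn.calibration import CalibratedClassifierCV
from sklearn.ensemble import GradientBoostingClassifier

# One row per customer with explanatory variables, plus one 0/1 target column
# per product indicating a purchase in the outcome window.
customers = pd.read_csv("customer_features.csv", index_col="customer_id")
targets = pd.read_csv("product_targets.csv", index_col="customer_id")

models = {}
for product in targets.columns:
    # Same algorithm and features for every product; only the target differs.
    # Cross-validated isotonic calibration makes scores comparable across products.
    model = CalibratedClassifierCV(
        GradientBoostingClassifier(), method="isotonic", cv=3
    )
    models[product] = model.fit(customers, targets[product])

# Score every customer on every product and rank the products per customer.
scores = pd.DataFrame(
    {p: m.predict_proba(customers)[:, 1] for p, m in models.items()},
    index=customers.index,
)
next_best_offer = scores.idxmax(axis=1)  # product with the highest calibrated probability
```

In practice you would of course also hold out data for evaluation and retraining, but looping the same pipeline over product targets is what makes the factory cheap to set up.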

One can later improve each individual model by creating tailored explanatory variables and fine-tuning parameters. In addition to product recommendations at the individual level, it is also natural to use the insight from these models to identify the target customer group for stand-alone marketing activities, such as sending an e-mail to customers with a high probability of purchasing car insurance or signing up for a credit card. If your business has a data science team, it is highly likely that some such models are already implemented and operationalized, which provides a great starting point for building a model factory.

Once the models have been deployed and the prediction model factory is up and running, each customer will have a probability of purchasing/using each product. However, this probability alone is often not enough to make a sound business decision on the next best product/service to offer. There might be a need to weigh in other factors, such as customer or product profitability, or even strategic aspects, e.g. ambitions to increase market share for specific products. Furthermore, there might be reasons why certain products should not be recommended to a specific customer at a certain time, for example if the customer has recently received an offer for a similar product. This process is shown schematically in the figure below and is a key part of the technical implementation.
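As an illustration of such a decision layer, the sketch below weights each calibrated probability by a product margin and suppresses recently offered products. The margins, the 90-day suppression rule and the data structures are made-up examples of the kind of business logic that typically sits on top of the model scores.

```python
# Sketch of a decision layer on top of the calibrated scores: weight purchase
# probability by product value and suppress recently offered products.
PRODUCT_MARGIN = {"car_insurance": 120.0, "credit_card": 80.0, "savings_account": 30.0}
SUPPRESS_DAYS = 90  # hypothetical business rule

def next_best_offer(probabilities, days_since_last_offer):
    """probabilities: product -> calibrated purchase probability for one customer.
    days_since_last_offer: product -> days since it was last offered (if ever)."""
    best_product, best_value = None, 0.0
    for product, prob in probabilities.items():
        last = days_since_last_offer.get(product)
        if last is not None and last < SUPPRESS_DAYS:
            continue  # business rule: don't repeat a recent offer
        expected_value = prob * PRODUCT_MARGIN.get(product, 0.0)
        if expected_value > best_value:
            best_product, best_value = product, expected_value
    return best_product

# Example: credit_card is suppressed, so car_insurance wins on expected value.
print(next_best_offer(
    {"car_insurance": 0.12, "credit_card": 0.30, "savings_account": 0.05},
    {"credit_card": 20},
))
```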

Five critical success factors for implementing an NBO system

In general, an NBO system is not a stand-alone product that can be bought off the shelf, but a solution that should be tailored to specific business needs. This holds true for the prediction model factory as well, and below we outline the most important preparation steps for a successful implementation.

Clear business case

In addition to the investment of time and money, development of an NBO system also requires new business processes and adjusted or new activities for sales and marketing personnel. To gain organizational buy-in, the NBO system should be supported by a business case weighing the estimated sales increase against the cost of developing and maintaining the models, as well as potentially increased sales channel costs. Creating realistic estimates of the increased sales is the most challenging part of this process. However, if you have previously operationalized a prediction model, its results can serve as a good proxy for these estimates. In that case, consider both the model’s ability to predict customer behavior and the sales personnel’s ability to use the insight actively in their communication. Further, any existing next best offer solutions (e.g. based on simple rules) are also excellent starting points. Should none of these be in place, it is possible to build parts of the business case on experience with similar initiatives outside your company.

Data preparation and quality assessment

The most important ingredient in any analytical model is the data it is based upon. Therefore, consider whether you have customer data of sufficient history and quality, structured and easily accessible for analysis. What contextual data do you have available about your customers that may influence purchase behavior, e.g. demographics, dialogue with your company, or product usage? Having efficient ways to source, integrate, quality-assure and structure these data for analysis is essential for success in implementing any analytical model, including NBO. In a less data-mature organization, the key is to start small and not take every complexity into account from the beginning, but rather aim for a “good enough” dataset as a starting point. From there, gradually implement good data practices to expand. Last but not least, you must of course also consider whether you are legally allowed to use the relevant data for the modelling, and whether it is ethical to do so.

Understanding communication channels and processes

Looking beyond the analytical model, one should review which customer interactions can be optimized with an NBO system and whether the processes for customer interaction should change. Are new channels going to be used? For which message, and in which situation? Do you need different strategies for outbound channels (e.g. email and outgoing phone calls) and inbound channels (e.g. a website banner, or an offer when the customer calls the business)? For example, a customer service representative might receive a recommendation on screen via a CRM system when a customer calls, creating a natural opportunity to suggest the product or service with the highest score. This might be a great way to utilize the NBO model, but it might also mean that new business processes must be developed, involving staff training and change management.

Preparing for operationalization and system integrations

Having established a business case, prepared accessible data, and considered which communication channels can take advantage of the NBO system, the next step is to draw up the technical architecture for operationalizing the model and integrating with consuming systems. Typical things to consider are:

  • Do you have access to an environment where prediction models can be developed, tested, deployed and retrained, e.g. an application server or a cloud environment?
  • Which systems consume the prediction results (e.g. CRM system or a website)? How are these systems integrated with your analysis environment? Do you need a new system to deliver the prediction results to end users?

You should also consider whether your company has staff with the required technical competency to implement the project. If you already have a data science team, how are they collaborating with sales and marketing to ensure that their analytical models create value in practice?
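To make the integration question above more tangible, here is a hypothetical minimal sketch of a scoring endpoint that a consuming system such as a CRM or website could call. The use of FastAPI, the parquet file of precomputed scores and the endpoint path are illustrative assumptions, not a recommended architecture.

```python
# Hypothetical minimal serving layer: expose the precomputed NBO scores to
# consuming systems (CRM, website) via a simple HTTP endpoint.
import pandas as pd
from fastapi import FastAPI, HTTPException

app = FastAPI()

# In practice this would be a database or feature store refreshed by the batch scoring job.
scores = pd.read_parquet("nbo_scores.parquet")   # index: customer_id, columns: products

@app.get("/nbo/{customer_id}")
def get_next_best_offer(customer_id: str):
    if customer_id not in scores.index:
        raise HTTPException(status_code=404, detail="Unknown customer")
    row = scores.loc[customer_id]
    return {"customer_id": customer_id, "offer": row.idxmax(), "score": float(row.max())}
```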

Measuring model accuracy and activity impact

Lastly, when the NBO model is up and running, we need to monitor both:

  • That the prediction models are sufficiently accurate (how well do we predict which product each customer is most likely to buy next, i.e. how good is the model at identifying interested customers?).
  • That the activity itself has an impact on customer behavior (are we able to influence customers to make a purchase they otherwise wouldn’t have made at that time, and do we have a greater influence on customers with a high score from the model?).

Together these make up the increased sales from the NBO system, and it is important to measure these effects separately to make the correct strategic decisions and secure future buy-in for other analytics initiatives. In order to do so, you need to consider the infrastructure used to collect data from the systems consuming the prediction results, and where the logic for the effect measurements should be placed.
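As a minimal sketch of how these two measurements could be kept separate, the example below computes model accuracy as AUC on a holdout set and activity impact as the uplift in conversion rate between contacted customers and a randomly held-out control group. The file names and column layout are hypothetical.

```python
# Separate measurement of model accuracy and activity impact (hypothetical data layout).
import pandas as pd
from sklearn.metrics import roc_auc_score

# 1) Model accuracy: how well the score separates buyers from non-buyers.
holdout = pd.read_csv("holdout_scores.csv")          # columns: score, bought (0/1)
print("AUC:", roc_auc_score(holdout["bought"], holdout["score"]))

# 2) Activity impact: uplift in conversion between contacted and control customers.
campaign = pd.read_csv("campaign_outcomes.csv")      # columns: group ("treatment"/"control"), converted (0/1)
rates = campaign.groupby("group")["converted"].mean()
print("Uplift:", rates["treatment"] - rates["control"])
```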

The principles for measuring model accuracy and activity impact are general to any prediction/action system, and in a future article we will go into greater detail and discuss practical considerations and challenges in implementing this successfully.

Summary

As an increasing share of sales happens digitally, the importance of NBO systems will only grow in the years to come, and successful implementations of such systems are already a competitive advantage for some of the world’s leading companies. In this article we have outlined how businesses with lots of customer data but relatively few products can get started building their own NBO system, and if you think your business can make use of our approach, we encourage you to contact us.

About the authors

Tallak Bergheim is a Manager at BearingPoint Oslo specializing in Data & Analytics. His expertise lies in utilizing Data & Analytics for the financial and utilities industries. Before joining BearingPoint, Tallak obtained his MSc. in Industrial Economics and Technology Management from the Norwegian University of Science and Technology in 2016. Email address: tallak.bergheim@bearingpoint.com.

Vebjørn Axelsen is a Partner working at BearingPoint Oslo and head of the Advanced Analytics area. He has significant experience as strategic advisor, architect and developer across data science and data engineering and has led a multitude of data science efforts across industries. Vebjørn obtained his MSc. in Computer Science specialized in AI from the Norwegian University of Science and Technology in 2007. Email address: vebjorn.axelsen@bearingpoint.com.

Dr. Lars Bjålie is a Technology Architect at BearingPoint Oslo. His practical data science experience lies within customer analytics and predictive maintenance. Before joining BearingPoint in 2016, Lars obtained a PhD in computational materials science from the University of California, Santa Barbara.
