Application of Machine Learning in Marketing. An academic presentation by Dr. Nancy Agnes, Head, Technical Operations, Statswork Group. www.statswork.com Email: info@statswork.com
TODAY'S DISCUSSION Outline INTRODUCTION APPROACHES TYPES OF MODELS SUPPORT VECTOR MACHINES
Introduction Machine learning (ML) is the study of methods and algorithms designed to learn the underlying patterns in data and to generate predictions from those patterns. Academic research in marketing has typically focused on causal inference, a focus driven by the need to make counterfactual predictions. Will increasing advertising spend, for example, raise demand? Answering this question requires an unbiased estimate of the effect of advertising on demand.
Marketing practice, on the other hand, relies on accurate predictions: which customers to target, which product configuration a customer is most likely to choose, which version of a banner ad will generate the most clicks, and what competitors' market shares and actions are likely to be. All of these are prediction problems. They do not require causal estimates; they require high out-of-sample prediction accuracy, and machine learning methods are well suited to them (a small illustration follows).
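To make the framing concrete, here is a minimal sketch (not part of the original presentation) of how a targeting question becomes a pure prediction problem: a classifier is trained on entirely synthetic customer data and judged only by hold-out predictive accuracy. The feature names (recency, frequency, spend) and all numbers are hypothetical.

# Minimal sketch: "which customers to target" as a pure prediction problem.
# All data are synthetic; the feature names are hypothetical placeholders.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)
n = 5000
X = np.column_stack([
    rng.exponential(30, n),    # recency: days since last purchase
    rng.poisson(5, n),         # frequency: purchases in the last year
    rng.gamma(2.0, 50.0, n),   # monetary: spend in the last year
])
# Synthetic "responded to campaign" labels driven by the features plus noise
logits = -2.0 - 0.02 * X[:, 0] + 0.3 * X[:, 1] + 0.005 * X[:, 2]
y = (rng.random(n) < 1.0 / (1.0 + np.exp(-logits))).astype(int)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)
clf = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_tr, y_tr)

# The only criterion here is out-of-sample accuracy, not unbiased coefficients
print("Hold-out AUC:", roc_auc_score(y_te, clf.predict_proba(X_te)[:, 1]))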
Approaches Machine learning approaches differ from econometric methods both in their emphasis and in the capabilities they provide. First, machine learning methods aim for the best out-of-sample predictions, whereas causal econometric methods aim for the best unbiased estimators. As a result, methods designed for causal inference often perform poorly at out-of-sample prediction: the best unbiased estimator does not necessarily give the best out-of-sample predictions, and in some cases a biased estimator predicts better on out-of-sample data, as the simulated sketch below illustrates.
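A minimal simulated illustration of that point, using standard scikit-learn tools rather than anything from the presentation itself: ridge regression is a biased estimator, yet on data with many predictors and substantial noise it can beat unbiased least squares on hold-out observations.

# Sketch: a biased estimator (ridge) vs. unbiased least squares, judged
# out of sample. The simulated setup and numbers are illustrative only.
import numpy as np
from sklearn.linear_model import LinearRegression, RidgeCV
from sklearn.model_selection import train_test_split
from sklearn.metrics import mean_squared_error

rng = np.random.default_rng(1)
n, p = 200, 80                                  # modest sample, many predictors
X = rng.normal(size=(n, p))
beta = rng.normal(scale=0.2, size=p)            # weak signal
y = X @ beta + rng.normal(scale=2.0, size=n)    # substantial noise

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.4, random_state=1)

ols = LinearRegression().fit(X_tr, y_tr)                        # unbiased
ridge = RidgeCV(alphas=np.logspace(-2, 3, 30)).fit(X_tr, y_tr)  # biased, lower variance

print("OLS   hold-out MSE:", mean_squared_error(y_te, ols.predict(X_te)))
print("Ridge hold-out MSE:", mean_squared_error(y_te, ridge.predict(X_te)))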
Second, machine learning techniques are designed to work in settings where we have no a priori understanding of how the data were generated; this distinguishes ML from econometric techniques, which are used to test a specific causal hypothesis. Third, unlike many empirical marketing methods, machine learning algorithms can handle a large number of variables and determine which should be kept and which discarded (see the feature-selection sketch after this paragraph). Finally, scalability is a central concern for ML approaches, and strategies such as feature selection and efficient optimization help achieve scale and efficiency. Because many of these algorithms must run in real time, scalability is increasingly important for marketers.
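As one possible illustration of automatic variable selection (the data, penalty, and scale here are assumptions, not an example taken from the text), an L1-penalised regression screens a large set of candidate marketing variables and keeps only those with non-zero coefficients.

# Sketch: lasso-based screening of 200 simulated candidate variables,
# of which only 10 actually drive the outcome.
import numpy as np
from sklearn.linear_model import LassoCV

rng = np.random.default_rng(2)
n, p, p_relevant = 1000, 200, 10
X = rng.normal(size=(n, p))
beta = np.zeros(p)
beta[:p_relevant] = rng.uniform(1.0, 2.0, p_relevant)
y = X @ beta + rng.normal(size=n)

lasso = LassoCV(cv=5).fit(X, y)           # penalty chosen by cross-validation
kept = np.flatnonzero(lasso.coef_ != 0.0)
print(f"Variables kept: {kept.size} of {p}")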
Types of models
Support Vector Machines The support vector machine (SVM) has been a prominent classification algorithm over the last 20 years, with applications in image recognition, text mining, and disease detection, owing to its stability and its ability to handle massive, high-dimensional data. Cui and Curry (2005) brought SVM to marketing, introducing its theory and applications. They also compared SVM's predictive performance with that of the multinomial logit model on simulated choice data, showing that SVM outperforms the multinomial logit, especially when the data are noisy and products have a large number of attributes (i.e., high dimensionality).
They also found that SVM beats the multinomial logit model by a substantial margin when predicting choices from larger choice sets. Although the predictive performance of both techniques declines as the choice set grows, because the first-choice prediction task becomes harder, the decline is considerably steeper for the multinomial logit than for SVM. Another extension is the latent-class SVM model, which allows latent variables to be included within SVM. Building on the convex–concave procedure used to estimate latent-class SVM while incorporating respondent heterogeneity, Liu and Dzyabura (2016) develop an algorithm for predicting the preferences of consumers with multiple tastes and show that its predictions outperform single-taste benchmarks. A rough simulated comparison of SVM and multinomial logit is sketched below.
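The sketch below is a rough simulated analogue of that comparison, not Cui and Curry's actual study or data: an SVM and a multinomial logit are fit to noisy simulated choice data with many attributes and compared on hold-out accuracy.

# Sketch: SVM vs. multinomial logit on simulated, noisy, high-dimensional
# choice data (5 alternatives, 50 attributes). Purely illustrative.
from sklearn.svm import SVC
from sklearn.linear_model import LogisticRegression
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

X, y = make_classification(
    n_samples=3000, n_features=50, n_informative=20,
    n_classes=5, flip_y=0.15, random_state=3,
)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=3)

svm = SVC(kernel="rbf", C=1.0).fit(X_tr, y_tr)
mnl = LogisticRegression(max_iter=2000).fit(X_tr, y_tr)   # acts as a multinomial logit

print("SVM hold-out accuracy:", accuracy_score(y_te, svm.predict(X_te)))
print("MNL hold-out accuracy:", accuracy_score(y_te, mnl.predict(X_te)))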
Contact Us
UNITED KINGDOM +44-1143520021
INDIA +91-4448137070
EMAIL info@statswork.com