Customer Satisfaction
The customer experience through the customer’s eyes
Nigel Hill, Greg Roche and Rachel Allen
contents Sat
5/7/07
09:48
Page i
Customer Satisfaction
THE CUSTOMER EXPERIENCE THROUGH THE CUSTOMER’S EYES
Nigel Hill, Greg Roche and Rachel Allen
Cogent
Cogent
Published by Cogent Publishing in 2007
Cogent Publishing Ltd, 26 York Street, London W1U 6PZ
Tel: 0870 240 7885
Web: www.cogentpublishing.co.uk
Email: info@cogentpublishing.co.uk
Registered in England no 3980246
Copyright © Nigel Hill, Greg Roche and Rachel Allen, 2007
All rights reserved. This book must not be circulated in any form of binding or cover other than that in which it is published and without a similar condition being imposed on the subsequent purchaser. No part of this publication may be reproduced, stored in a retrieval system or transmitted in any form, or by any means, electronic, mechanical, photocopying, recording or otherwise, without either prior permission in writing from the publisher or a licence permitting restricted copying. In the United Kingdom, licences are issued by the Copyright Licensing Agency, 90 Tottenham Court Road, London, W1P 9HE. The right of Nigel Hill, Greg Roche and Rachel Allen to be identified as authors of this work has been asserted in accordance with the Copyright, Designs and Patents Act 1988.
A British Library Cataloguing in Publication record is available for this publication.
ISBN 978-0-9554161-1-8
Printed and bound in Great Britain by The Charlesworth Group, Wakefield, West Yorkshire.
Available from all good bookshops. In case of difficulty contact Cogent Publishing on
(+44) 870 2407885.
About the authors
Nigel Hill
Nigel is the founder of The Leadership Factor, a company that specialises in helping organisations to measure, monitor and improve their customers’ experience. With offices in the USA, Australia, Russia, Spain, Portugal and France as well as the UK, The Leadership Factor provides research services, advice and training worldwide. Nigel has written three previous books and many articles about customers and speaks at conferences and events around the world. He has helped organisations such as Manchester United FC, Chelsea FC, the BBC, ASDA and Land Securities, amongst many others.
Greg Roche
Greg is Client Director at The Leadership Factor and one of the UK’s leading experts in helping organisations to use data from customer satisfaction surveys to improve their customer experience. He has worked with many different organisations across all sectors of the economy, including Royal Bank of Scotland, Visa, Tarmac, Irish Life, Allied Irish Bank, Churchill, Privilege, Jurys Doyle Hotels, Sainsbury’s Convenience and The Bank of New York.
Rachel Allen
Rachel is Client Manager at The Leadership Factor. She is an expert on customer satisfaction research and complaint handling. Rachel has written many articles and speaks widely on these subjects at conferences, seminars and other events. She works with many different organisations on surveys and complaint handling, including Direct Line, Tesco, Royal Borough of Kensington and Chelsea, HBOS, Forensic Science Service and Royal Bank of Scotland International.
If you would like to contact any of the authors go to www.customersatisfactionbook.com and follow the contact instructions.
ACKNOWLEDGMENTS
Many people have helped in the preparation of this book. Particular thanks to Robert Crawford, Director of the Institute of Customer Service, for writing the Preface and for being a continual source of honest advice, stimulating views and professional support. Thanks also to the many clients and contacts from companies and organisations across all sectors of the economy who have helped to develop our ideas and understanding whilst grappling with their real work of improving customer satisfaction. Amongst these, very special thanks to those who reviewed this book, including Tim Oakes from RBS, Mark Adams from Virgin Mobile, Scott Davidson from Tesco Personal Finance and Quintin Hunte from Fiat. All made many useful suggestions for amendments or additions. Needless to say, any opinions, omissions or mistakes in the book are the responsibility of the authors.
There is much more to publishing a book than writing the words. Ask Rob Ward, who not only did the typesetting and produced the diagrams but also had to amend it all, many times, as the authors had second, third, fourth thoughts and more. Thanks also to Ruth Colleton, who cross-checked every single reference on the internet and, along with Janet Hill, corrected the proofs. Thanks to Rob Ward and Rob Egan for the cover design and to Charlotte and Lucy at Cogent Publishing for organising the never-ending list of tasks that turn a manuscript into a printed book that you can buy in shops or on the internet!
CONTENTS
Acknowledgements  iv
Introduction  vi
CHAPTER ONE  DISPELLING THE MYTHS  1
CHAPTER TWO  THE BENEFITS OF CUSTOMER SATISFACTION  18
CHAPTER THREE  METHODOLOGY ESSENTIALS  29
CHAPTER FOUR  ASKING THE RIGHT QUESTIONS  43
CHAPTER FIVE  EXPLORATORY RESEARCH  57
CHAPTER SIX  SAMPLING  69
CHAPTER SEVEN  COLLECTING THE DATA  81
CHAPTER EIGHT  KEEPING THE SCORE  110
CHAPTER NINE  THE QUESTIONNAIRE  125
CHAPTER TEN  BASIC ANALYSIS  150
CHAPTER ELEVEN  MONITORING PERFORMANCE OVER TIME  166
CHAPTER TWELVE  ACTIONABLE OUTCOMES  185
CHAPTER THIRTEEN  COMPARISONS WITH COMPETITORS  201
CHAPTER FOURTEEN  ADVANCED ANALYSIS: UNDERSTANDING THE CAUSES AND CONSEQUENCES OF CUSTOMER SATISFACTION  226
CHAPTER FIFTEEN  USING SURVEYS TO DRIVE IMPROVEMENT  250
CHAPTER SIXTEEN  INVOLVING EMPLOYEES  268
CHAPTER SEVENTEEN  INVOLVING CUSTOMERS  282
CHAPTER EIGHTEEN  CONCLUSIONS  289
Glossary  295
Index  307
Introduction
This book is about building successful businesses through doing best what matters most to customers. In one volume we explain why this is so important, how it is achieved and how to measure and monitor the organisation’s success in doing so. Our ambition is to inspire you to take action, make your customers more satisfied and loyal and your company more successful. The book is organised in a clear, report-style format, familiar to most managers and designed to make it easy to read and navigate. All chapters are fully referenced for those wanting more detailed information. If you are still hungry for more knowledge, have unanswered questions or want to debate an issue, the book’s website, www.customersatisfactionbook.com, is the place for you. You can use it to email the authors, find relevant customer satisfaction links, check out the blog or simply keep up with the latest events and ideas in the customer satisfaction world. We look forward to hearing from you.
Nigel Hill, Greg Roche and Rachel Allen
August 2007
Chapter one
5/7/07
09:49
Page 1
Dispelling the Myths
CHAPTER ONE
Dispelling the Myths
This book is based on the premise that organisations succeed by doing best what matters most to customers. Human beings seek pleasurable experiences and avoid painful ones, so tend to return to companies that meet or exceed their requirements whilst shunning organisations that fail to meet them. These self-evident truths are most easily described by the phrase ‘customer satisfaction and loyalty’. Customers whose needs are met or exceeded by an organisation form favourable attitudes about it. Since people’s attitudes drive their future behaviours, highly satisfied customers usually display loyal behaviours such as staying with the company longer, buying more and recommending it – all of which are highly profitable to the company concerned. This book is about how organisations can accurately monitor customers’ attitudes (satisfaction) in order to make decisions that will drive favourable customer behaviours (loyalty), thus making them more profitable – a concept that is simple as well as sensible. In recent years, however, there have been many attempts to complicate this process, leading to confusion, doubt and many myths about organisations’ relationship with their customers; an unfortunate state of affairs that we intend to address in this first chapter.
At a glance
In this chapter we will examine the 6 main myths about measuring customer satisfaction:
a) Customer satisfaction is old hat. It’s all about wowing the customer.
b) Only loyalty matters.
c) Improving customer satisfaction and loyalty is difficult.
d) Surveys don’t work.
e) Consulting customers isn’t the only way of monitoring customer satisfaction.
f) Surveys reduce customer satisfaction and loyalty.
1.1 Customer satisfaction is a limited concept
This book is about how organisations succeed by putting customers at the top of their agenda. From the 1980s in America, and the 1990s in most other countries, customer satisfaction was rarely challenged as a key organisational goal. In more recent years, however, a growing industry has developed around modifications or
enhancements to the concept of customer satisfaction, spawning a multitude of words and phrases to describe it. The list is endless, but amongst the most common are customer loyalty, the customer relationship, the customer experience, customer focus, customer delight, wowing the customer, the loyalty effect, customer retention, the advocacy ladder, emotional attachment, service quality, service recovery, zero defections and customer win-back. Needless to say, people get very passionate about defending their own little set of words, but they’re all just semantics. They’re just different words that describe the same phenomenon: the attitudes or feelings that customers form based on their experiences with an organisation. Satisfaction is a convenient generic word to summarise all these attitudes and feelings. We’re in favour of anything that makes things better for customers. We think it’s fantastic if organisations can delight their customers and even better if they can make customers feel some kind of emotional attachment to them. However, those feelings are no more than descriptors for the type of attitudes customers hold at the highest levels of satisfaction, just as disgust could describe extreme dissatisfaction and indifference the mid-range of the satisfaction spectrum.
KEY POINT The word “satisfaction” is the most appropriate label for the range of attitudes and feelings that customers hold about their experiences with an organisation.
1.2 Only loyalty matters
Whatever you call these customer attitudes, they are massively important to all organisations since they determine customers’ future behaviours. Collectively known as loyalty, it is the behaviours rather than the attitudes that really interest companies. The best concise description of what loyalty is and why it’s so important is provided by Harvard Business School. They call it the 3Rs1.
FIGURE 1.1 The 3Rs of customer loyalty: Retention, Related sales, Referrals
The 3Rs are customer behaviours – staying longer, choosing to use more of the products or services supplied by an organisation and recommending it to others. For example, Starbucks discovered that a ‘highly satisfied’ customer spent an average of £4.42 per visit and made an average of 7.2 visits per month. By contrast an ‘unsatisfied’ customer spent £3.88 and visited 3.9 times per month2. Over one year,
that’s £381 compared with £181. See Chapter 14 for details on how Starbucks related these satisfaction attitudes and loyalty behaviours to the customer experience. There is conclusive evidence that loyalty behaviours such as these contribute hugely to corporate profitability. This is because a customer’s value to a business typically increases over time, (known as customer lifetime value). One-off, transient customers are typically a cost, whereas loyal, long-standing customers become highly profitable. The evidence for the profitability of loyal customers is fully explained and referenced in Chapter 2 of this book. Since these customer behaviours have such an obvious direct link with organisations’ financial performance it has prompted some commentators to question the value of customer satisfaction, using phrases like ‘the satisfaction trap’3. Some argue that since loyalty has a financial value, companies should focus all their efforts and resources on building customer loyalty4,5. Following the same logic, the fact that satisfaction per se has no financial value would suggest that monitoring it is a pointless waste of resources, customer loyalty being the ‘true measure’6. The fact that several companies including Xerox7, GM8 and Forum9 reported that satisfied customers do defect seemed to further devalue the whole concept of customer satisfaction, especially when Frederick Reichheld claimed in Harvard Business Review that 65% to 85% of customers that switched supplier were satisfied with their previous one10. This has prompted other authors to make claims such as “one thing is certain: current customer satisfaction measurement systems cannot be used as a reliable predictor of repeat purchase”6 or “it is impossible to accurately forecast customer retention rates from levels of customer satisfaction”11. 
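The Starbucks figures above are simple arithmetic. As an illustrative sketch (the per-visit spend and visit frequencies come from the text; the annual totals are truncated to whole pounds, matching the £381 and £181 quoted):

```python
# Annualised spend per customer, using the Starbucks figures quoted above.
def annual_spend(spend_per_visit: float, visits_per_month: float) -> float:
    return spend_per_visit * visits_per_month * 12

highly_satisfied = annual_spend(4.42, 7.2)  # 'highly satisfied': £4.42 x 7.2 visits/month
unsatisfied = annual_spend(3.88, 3.9)       # 'unsatisfied': £3.88 x 3.9 visits/month

# Truncated to whole pounds, these match the figures in the text.
print(f"£{int(highly_satisfied)} vs £{int(unsatisfied)}")  # £381 vs £181
```

That gap of roughly £200 a year per customer is what makes the link between satisfaction attitudes and loyalty behaviours, explored further in Chapter 14, commercially significant.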
In reality, most customer experts now recognise such views as superficial, simply displaying a very poor understanding of how the relationship between organisations and their customers actually works. As Johnson and Gustafsson12 of Michigan University point out, “to argue that quality or satisfaction or loyalty is what matters misses the point. These factors form a chain of cause and effect, building on each other so they cannot be treated separately. They represent a system that must be measured and managed as a whole if you want to maximize results.” There are 4 key reasons why effectively monitoring customer satisfaction provides essential management information for organisations to optimise the benefits of their relationship with customers:
1.2.1 Attitudes precede behaviours
Whether we call them satisfaction, delight, emotional attachment or the latest conference buzzword, the attitudes customers hold about an organisation determine their future behaviour towards it. Measuring customer satisfaction is therefore the main lead indicator of future customer behaviours, which, in turn, will determine company profitability.
FIGURE 1.2 Attitudes and behaviours: Customer attitudes → Customer behaviour → Organisational outcomes
CSM (customer satisfaction measurement) is totally focused on the first oval in Figure 1.2 – measuring customers’ attitudes about how satisfied they feel with the organisation. As lead indicators, customers’ attitudes provide by far the most useful data for managing organisational performance. Of course, customers’ behaviours, especially their loyalty behaviours, are extremely important to companies, but they have already happened. By the time a customer has defected or chosen an alternative supplier for a related product or service, the opportunities have been missed. That is not to say that customer behaviours should not be monitored. Information such as customer defection rates, average spend and complaints are all extremely useful measures of organisational performance (and will be covered in Section 1.5), but they reflect what has already happened in the past and do not tell you how to improve on that. Providing information on how to improve in the future is the main purpose of customer satisfaction measurement.
KEY POINT Customer satisfaction is a lead indicator that predicts future customer behaviours.
1.2.2 How satisfaction affects loyalty
Understanding the difference between customers’ attitudes and behaviours, and how the relationship between them works, is crucial for managers involved in any aspect of customer management. Whilst it is broadly true to say that satisfied customers will be more loyal than dissatisfied ones, so customer satisfaction must be important, that is almost as simplistic as concluding that customer satisfaction can’t be important because some satisfied customers defect. In the real world, there are different levels of customer satisfaction and these can affect companies in widely differing ways. In the 21st century, virtually all organisations perform sufficiently well to deliver a reasonable level of customer satisfaction; at least in markets where customers have choice and can switch suppliers with relative ease.
Few perform badly enough to dissatisfy a significant proportion of their customer base. That may be progress compared with two or three decades ago, but customers’ expectations have also risen since then. In most markets suppliers need to do much more than not dissatisfy customers if they want to maximise the benefits of customer satisfaction. As Harvard point out, the zone of indifference just isn’t good enough1. Why would customers in the zone of indifference stay with a supplier other than
through inertia? Why would they buy an additional product or service or recommend the business? They wouldn’t. These days most customers think they can do better than ‘OK’, ‘average’ or ‘good enough’. To keep customers, suppliers have to deliver such great results that rational people will conclude that it would be difficult to do better elsewhere.
FIGURE 1.3 The satisfaction-loyalty relationship: loyalty (0-100%) plotted against satisfaction (scored 1-10) rises through the zone of defection, the zone of indifference and the zone of affection, from ‘saboteur’ at the lowest satisfaction to ‘apostle’ at the highest.
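To make the zones concrete, here is a minimal sketch of bucketing survey scores into the zones of Figure 1.3. The cut-off scores of 6 and 8 are purely illustrative assumptions, not figures from the book; as the text stresses, every company must identify its own curve (see Chapter 14).

```python
# Bucket a 1-10 satisfaction score into the zones sketched in Figure 1.3.
# The boundaries (6 and 8) are hypothetical, for illustration only.
def satisfaction_zone(score: float) -> str:
    if not 1 <= score <= 10:
        raise ValueError("score must be on the 1-10 scale")
    if score < 6:
        return "defection"
    if score < 8:
        return "indifference"
    return "affection"

# Share of respondents who are 'merely satisfied' and therefore at risk:
scores = [7, 9, 5, 8, 7, 10, 6]
at_risk = sum(1 for s in scores if satisfaction_zone(s) == "indifference")
print(f"{at_risk}/{len(scores)} respondents in the zone of indifference")
```

The point of the bucketing is the one the text makes: the middle band, not the dissatisfied tail, is usually where the hidden churn risk sits.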
KEY POINT Satisfaction is the main driver of loyalty, but ‘mere satisfaction’ is not enough. Customers have to be highly satisfied.
According to Jones and Sasser, most organisations don’t understand the extent to which ‘very satisfied’ is more valuable than ‘satisfied’7. Some managers with a poor understanding of the satisfaction-loyalty relationship have expressed surprise when they have discovered that satisfied customers are not always loyal – using it as evidence that investing in good customer service is pointless. Perhaps if they had monitored the percentage of their customers that were in the ‘zone of indifference’ they would have been less surprised. Building on the Harvard work of Heskett, Schlesinger, Sasser, Jones1,7 and others, Keiningham and Vavra13 coined the phrase ‘mere satisfaction’ to emphasise the extent to which merely satisfying customers isn’t enough for today’s demanding consumers. To realise the full benefits of customer satisfaction, managers must understand the difference between making more customers satisfied and making customers more satisfied. This remains a widespread problem, as evidenced by the frequent use of verbal rating scales and simple single-question headline measures of overall satisfaction (see Chapters 8 and 11). In reality, there is no universally applicable curve that reflects the relationship between
customer satisfaction and loyalty. Figure 1.3 merely illustrates the concept. In Chapter 14 we will explain how a company can identify its own satisfaction-loyalty curve in order to make the best decisions about how to manage customers for optimum loyalty.
1.2.3 Satisfaction is the main driver of loyalty
So whilst it is true that satisfaction is not an end in itself and that ‘merely satisfied’ customers do defect, it is also true that customer satisfaction is the main driver of the real goal of customer loyalty. In their excellent article “Why satisfied customers defect”7, Harvard’s Jones and Sasser point out the obvious answer: satisfied customers defect because they’re simply not satisfied enough. Now that we fully understand the non-linear nature of the relationship between customer satisfaction and loyalty, it is clear that to ensure loyalty, most companies will have to make their customers highly satisfied, not ‘merely satisfied’. Many studies in the 1990s concluded that customer satisfaction was a primary determinant of loyalty, including those by Rust and Zahorik14, Rust, Zahorik and Keiningham15 and Zeithaml, Berry and Parasuraman16. White and Schneider17 found that customers with better perceptions of service quality were more likely to remain customers and to tell other people about their experiences. In The Value-Profit Chain, Harvard’s Heskett et al state that the lifetime value of the most satisfied customers is 138 times greater than that of the least satisfied18. However, the idea that customer satisfaction affects companies’ financial performance only through customer loyalty under-values the importance of customer satisfaction. Johnson and Gustafsson point out that customer satisfaction has direct effects on profit, including lower costs, since dissatisfied customers are much more likely to consume organisational resources through handling complaints, resolving problems and asking for help.
Based on the vast database of the American Customer Satisfaction Index19, Michigan University’s Fornell et al challenge the view that customer satisfaction is less important than loyalty, since it is satisfaction measures rather than loyalty data that enable organisations to take action to improve their relationship with customers. “The risk is that companies begin to focus too much on managing loyalty per se rather than building profitable loyalty through customer satisfaction.”20 It is actionability that we now turn to.
1.2.4 Taking action
To maintain the high levels of customer satisfaction needed to keep customers loyal, companies must continuously improve the service they deliver. Moreover, they must focus their improvement efforts in the right areas. To make customers highly satisfied, organisations have to do best what matters most to customers. It’s no use being good at things that aren’t important to customers21.
As we will explain in this book, the whole essence of CSM (customer satisfaction measurement) is about identifying the extent to which an organisation is doing best what matters most to customers (exceeding, meeting or failing to meet their requirements) and pinpointing the best opportunities for improving that performance. A good customer satisfaction survey is therefore based on customers’ most important requirements so that it can provide specific, actionable information on where the organisation is falling short in customers’ eyes and where it would achieve the best returns from investing in actions or changes to improve customer satisfaction. Chapters 12-15 explain how to produce actionable outcomes from a CSM survey. Some organisations monitor measures that are simply not actionable. In his Harvard Business Review article ‘The One Number You Need to Grow’22, Reichheld maintained that since his tests showed propensity to recommend to be the single question that had the strongest statistical relationship to future company performance, there was no point asking any other questions in customer surveys. This led to his concept of the ‘net promoter’ score (achieved by subtracting the percentage of respondents who would not be willing to recommend from those who would be willing) as the only survey measure that organisations need to monitor. We would agree that, of the range of loyalty questions that can be asked, recommendation is usually the closest proxy for loyalty for most (but not all) organisations. However, apart from the fact that a single-item question is much less reliable and more volatile than a composite index (see Chapter 11), what actual use is a net promoter score for decision making? Customer research is not just about knowing a score or a trend, it’s about understanding, so that managers can make the right decisions. If the headline measure (whatever it is) goes down or fails to meet the target, managers have to know what to action or change to improve it. Providing that information is the fundamental purpose of CSM.
KEY POINT The main purpose of measuring customer satisfaction is to make decisions on how to improve it. Actionable information on how to make customers more satisfied is therefore a crucial outcome.
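The net promoter calculation described above (the percentage willing to recommend minus the percentage not willing) is trivial to compute, which is part of its appeal. The response labels in this sketch are hypothetical, and some implementations instead band a 0-10 recommendation score:

```python
# Net promoter score as described in the text: % of respondents willing
# to recommend minus % not willing. Response labels are hypothetical.
def net_promoter_score(responses: list[str]) -> float:
    promoters = responses.count("would recommend")
    detractors = responses.count("would not recommend")
    return 100.0 * (promoters - detractors) / len(responses)

responses = (["would recommend"] * 55
             + ["neutral"] * 25
             + ["would not recommend"] * 20)
print(net_promoter_score(responses))  # 55% - 20% = 35.0
```

As the text argues, the single score tells managers nothing about what to change to move it; providing that diagnosis is the job of the fuller CSM survey.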
1.3 Improving satisfaction and loyalty is difficult!
Improving customer satisfaction is not difficult. It’s not very difficult. It’s extremely difficult. In reality, few managers would claim that it’s easy, but organisations’ behaviour demonstrates that they don’t fully appreciate the difficulty or importance of the task. Yes, they want to improve customer satisfaction, but they also want to minimise costs.
Few get this balance right. Responsibility for customer satisfaction is often vested in just one of the organisation’s departments, often called Customer Service. In some businesses its head isn’t even a main board member and, due to many organisations’ predominant focus on controlling or reducing costs, ‘quick wins’ to improve customer satisfaction become highly attractive, if not the only option, for the ‘head of customer service’. So desperate are many managers to make a difference at no, or virtually no, cost that they become real suckers for the latest quick-fix hype that they’ve heard at a conference or read in a book.
1.3.1 My daughter’s ruined the policy document
One of the authors recently attended a conference where the keynote speaker waxed lyrical about the imperative of touching customers’ emotions and related the following anecdote to illustrate how organisations could attain this great prize at very modest cost. A customer of a UK insurance company, he said, telephoned the call centre asking for a replacement policy document as her daughter had scribbled all over the original. It transpires that this company gave each of its call centre operatives a £25 budget to use any way they wished to improve customer satisfaction. The operative involved used some of this budget to enclose a pack of crayons and a colouring pad for the child with the replacement policy document. A nice touch. The customer was no doubt very pleased. It may or may not have influenced the customer’s loyalty behaviour at renewal time. But even if it did, how much difference is this kind of approach going to make to the ability of a large insurance company with millions of customers to achieve the financial benefits of maximising customer satisfaction and loyalty? According to Barwise and Meehan21, not much. They maintain that: “Branding and emotional values are great if you are already providing an excellent functional product or service.
Outside the box strategy is terrific – when it works. But because even some of the best organizations are performing badly on the basics, we recommend that they start inside the box, ensuring that they reliably meet customers’ reasonable expectations on the product or service itself. Once the basics are securely in place, the organization has a solid platform for great emotional branding and for more radical innovation.” It’s not the £25 budget or the crayons and colouring pad that are the problem. It’s the fact that many organisations place considerable emphasis and hope on strategies of this ilk, which at best can make only a very marginal difference to the satisfaction and loyalty of the total customer base if the organisation is not consistently meeting customers’ basic requirements. To quote Barwise and Meehan again, organisations “must focus on what matters most to customers, usually the generic category benefits that all competing brands provide, more or less, and not unique brand differentiators … Everything hinges on giving customers what matters most to them, even if that proposition seems less exciting than focusing on novelty, uniqueness or the latest
management or technology fad.” They illustrate their view with the contrasting fortunes of two of the big players in the UK mobile telephony market.
KEY POINT Customer satisfaction is not improved by low-cost gimmicks and quick fixes. It takes real investment in the basic essentials of meeting customers’ most important requirements.
1.3.2 Doing best what matters most to customers
Having been awarded identical and simultaneous licences, and with access to exactly the same technology, but following completely different strategies, One2One and Orange became the 3rd and 4th companies to enter the UK mobile phone market in September 1993 and April 1994 respectively. One2One pursued differentiation and a strong customer acquisition strategy, offering free off-peak local calls. This appealed to consumers, differentiating One2One from the business-focused strategies of the incumbents, Vodafone and Cellnet, and enabled it to acquire twice as many customers as Orange in its first six months of operation. Orange focused on getting the basics right. It was well known in the industry that customers were dissatisfied with the frustrations of mobile telephony: frequent call terminations, inability to get through due to lack of capacity and coverage, the perceived unfairness of the operators, onerous contracts, and extortionate pricing strategies such as full-minute billing. Orange simply addressed these drivers of dissatisfaction, offering per-second and itemised billing and investing in network reliability. Meanwhile, One2One had attracted large numbers of price-sensitive customers who clogged its limited network capacity with their free off-peak calling and became frustrated with its poor service. By the end of 1996 there was telling evidence of who was doing best what mattered most to customers.
A Consumers’ Association survey23 found that whilst 14% of Orange customers reported that they could not always connect with the network, nearly four times as many One2One customers couldn’t always connect; a figure that was double the industry average. The survey also showed Orange’s customers to be far more loyal than those of the three other suppliers. Moreover, at £442 Orange had already achieved the industry’s top per-customer revenue figure. One2One was over £100 behind at £341. Orange was also demonstrating that satisfied customers will pay more. By this time it was around 5% more expensive than Vodafone and Cellnet and its prices were a massive 30% higher than those of One2One. Conventional strategy would have dictated that a late entrant into a commodity market needed a USP, a ‘silver bullet’21, like One2One’s free off-peak calls to stand any
chance of success. Instead, by focusing on getting the basics right, Orange acquired customers at a slower rate, but kept them longer and made more profit out of each one, and in doing so delivered three times the shareholder value achieved by One2One. In August 1999 Deutsche Telekom bought One2One for £6.9 billion. Two months later Mannesmann acquired Orange for £20 billion.
1.4 Surveys don’t work
Over the years we have met quite a few managers at conferences and similar events who have lost faith in their customer satisfaction surveys. Many of them work for large organisations that have been monitoring customer satisfaction data for many years but claim that whatever they do, they don’t seem to be able to improve customer satisfaction; their headline measure typically fluctuates within a fairly narrow range but shows no upward trend. Why is this happening? Is the real problem that they can’t improve customer satisfaction or that their customer satisfaction surveys simply don’t show it? There is plenty of evidence that it’s the latter. In “The one number you need to grow”22, Reichheld has this to say about customer satisfaction surveys: “Most customer satisfaction surveys aren’t very useful. They tend to be long and complicated, yielding low response rates and ambiguous implications that are difficult for operating managers to act on.” Based on research conducted by Texas University24, Griffin6 makes very similar statements, saying that customer satisfaction measures suffer from a number of problems that tend to inflate the score, such as positively biased questions and flaws in self-completion surveys. This is rather like reporting to shareholders that the company is struggling to make a profit but it’s because the accounts produced by the finance department aren’t very accurate! Professor Myers25 from the Drucker School, Claremont Graduate University, has expressed serious concern about the methodologies used by many organisations to measure customer satisfaction, “from overly sophisticated experiments by academics to overly simplistic surveys conducted by many market research firms.” Many organisations even fail to ask the right questions in their customer satisfaction surveys, making it extremely unlikely that they will produce information that will help them to improve satisfaction and loyalty. We will address this problem in Chapters 4 and 5. Failing to understand the difference between customer satisfaction and other forms of market research, some organisations use scales that are not sufficiently sensitive to detect the relatively small changes in customer satisfaction that typically occur. In Chapter 8 we explain how to develop a CSM process that will make it possible to ‘move the needle’.
KEY POINT Many organisations monitor flawed measures that don’t reflect how satisfied or dissatisfied customers feel and are of no value for improving customer satisfaction.
Chapter one
Dispelling the Myths
When we question the people who tell us their organisation can’t improve its customer satisfaction scores, we almost invariably discover serious problems in their CSM methodology. As we point out in the next section, improving customer satisfaction and loyalty is difficult enough without attempting to achieve it with the handicap of misleading information generated by flawed surveys.
1.5 Customer surveys are not the only way of monitoring customer satisfaction

Surely there are many other ways of monitoring how successfully an organisation is meeting its customers’ requirements that are easier and less costly than conducting customer satisfaction surveys, and that often can be done with information the organisation already possesses? Analysing complaints is a good example. Other possibilities include analysing customer defections, feedback from employees or simply monitoring whether sales are increasing. Internal metrics such as speed of solution, percentage of deliveries on time or speed of answering the telephone can provide accurate information on service quality at little cost. Mystery shopping can also generate detailed information on the customer experience.

1.5.1 Incomplete measures

Customers’ feelings about their total experience with an organisation form the attitudes that drive their future behaviours. Consequently, companies cannot manage this process without a complete understanding of these feelings and attitudes. Consulting customers is the only way of producing this level of understanding. All alternative measures are incomplete. Internal metrics can provide accurate and useful information on the hard factors but not the soft ones, such as how friendly and helpful the staff are. The way an organisation handles problems is an important part of the customer experience, but again only part of it, so analysing complaints doesn’t come close to providing a full understanding of customer satisfaction. Nor do exit interviews with lapsed customers, who may give views on their entire customer experience, but who form only a small part of the customer base and have levels of satisfaction that are not representative of customers generally.

1.5.2 Lagging measures

Gathering feedback from lost customers highlights another disadvantage: it’s too late. Whilst a thorough exit interview process may recover a few defecting customers, the unsatisfactory aspects of their customer experience that led to their behaviour happened in the past. Organisations need much earlier feedback on areas of customer dissatisfaction in order to address the problems before they drive customers away. Equally, rising or falling sales are very good indicators of customers’ loyalty behaviours, but not of the attitudes that caused those behaviours. A good customer
satisfaction measurement process provides current information on whether the organisation is succeeding or failing to make customers more satisfied with their experience. If the latter, it provides a lead indicator of problems that lie ahead for the business, in time to address them.

1.5.3 Performance measures

Even on the hard issues, internal metrics provide only half the picture. As Tom Peters pointed out over 20 years ago, perception is reality26. Even if customers do form mistaken perceptions about completely factual aspects of a supplier’s performance, these are the attitudes on which they are basing their loyalty and supplier selection decisions. If companies want to manage their future stream of revenues from customers, they need to be inside the customers’ heads, understanding how customers see their experience and how it is leading them to form attitudes about the organisation that will drive their future behaviours. Feedback from staff, as well as being incomplete and often biased, can only provide information on how the supplier believes it has performed with customers. Since many customers don’t voice complaints or compliments, employees can never fully understand how customers feel.

1.5.4 Mystery shopping

Some organisations view mystery shoppers as customer substitutes. True, they have to go through a typical customer journey. If they’re mystery shopping a hotel, they will stay overnight, eat dinner and breakfast and use any other facilities such as a health club. But are they the same as real customers? Of course they’re not. Professional mystery shoppers are exactly that. They are trained to observe and record many detailed aspects of the service delivery process and consequently provide highly detailed information that is very useful to operational managers. Examples might include whether the hotel receptionist was wearing a name badge, addressed the customer by name and provided clear directions to the room. They can record waiting times at check-in and check-out as well as in the restaurant. They can also make judgements on levels of cleanliness or staff friendliness and helpfulness. Technology even permits surreptitious video recording of staff, though companies need to think carefully about the implications of this for organisational culture and values27.

So mystery shopping provides many practical benefits for operational managers, for use in staff training, evaluation and recognition, but it can’t provide an understanding of how customers feel about the customer experience and the attitudes they are forming about the company. Since mystery shoppers’ profession is to make observations on companies’ customer service performance, they cease to be normal customers, becoming highly aware and often much more critical than typical customers25. Whilst this is good for their role, it doesn’t provide an accurate reflection of how normal customers feel28. Morrison et
al reported other inconsistencies with mystery shopping, such as males and older people producing less accurate reports than females or younger ones29.

KEY POINT
Mystery shoppers are not the same as real customers. Reliable information about customers’ attitudes and their likely future behaviour will be generated only by consulting the customers themselves.

Smile school
In their book “Loyalty Myths”, Keiningham et al use the experience of Safeway in America to illustrate the dangers of mystery shopping27. They explain how Safeway based its strategy in the 1990s on delivering superior customer service and invested in an extensive mystery shopping programme to monitor employees’ performance in delivering it. Employees were expected to do things like thank customers by name, offer to carry their groceries to the car, smile and make eye contact: all very desirable customer service behaviours which should lead to customer satisfaction. And they did. Throughout the 1990s Safeway’s customer satisfaction levels and financial returns were very high. However, in stark contrast to the teachings of the Service-Profit Chain1, customer satisfaction and employee satisfaction were moving in opposite directions. This was because employees who failed to achieve a target mystery shopping score were sent for remedial training (called ‘Smile School’ by the employees), and could be dismissed if their performance failed to improve. Moreover, female employees’ concern that the smiling and eye contact could send the wrong signals to some male shoppers was confirmed by an increase in the number of sexual harassment incidents committed by customers. This led to a number of charges filed against Safeway by the employees’ union and by some individual female employees. In the end, the Service-Profit Chain wasn’t wrong: poor employee morale adversely affected customer satisfaction and Safeway’s financial performance.

According to the ACSI19, Safeway’s customer satisfaction score rose substantially as a result of its focus on customer service, from 70% to a high of 78% by 2000. However, as the problems with employees intensified, the customer satisfaction gains were virtually all lost, Safeway’s score falling back to 71% by 2003. In the European Union there are restrictions on the use of mystery shopping that prevent it being used for disciplinary purposes against individual employees. It is increasingly recognised by good employers that mystery shopping is best used for factual rather than judgemental aspects of service and to provide positive feedback and recognition to employees. Good companies also understand that it provides operational information rather than a reliable measure of how satisfied or dissatisfied customers feel.
1.6 Surveys reduce customer satisfaction and loyalty

It has been claimed that consulting customers to find out how satisfied they are with their customer experience, and to gather feedback on improvements they would like to see, actually offends customers and reduces their satisfaction and loyalty30. The argument is that since many people have busy lives, a survey is seen as such an inconvenient and unwelcome intrusion that it has a negative effect on respondents’ attitudes and behaviours. In fact, academic tests prove the opposite to be true.

Paul Dholakia of Houston’s Rice University and Vicki Morwitz of New York University’s Stern School of Business were interested in the many research studies that had shown that surveys tend to increase customers’ loyalty31 and their propensity to buy a company’s product32,33,34, but felt that the studies were too restricted, focusing on short-term attitude change or one-off behaviour like a single purchase35,36,37. They set out to discover whether surveys had a more permanent effect on customers’ attitudes and behaviour. To do so, they undertook a field experiment with over 2,000 customers of an American financial services company. One randomly selected group of 945 customers took part in a 10-minute customer satisfaction survey by telephone. The remaining 1,064 customers were not surveyed and acted as the control group. A year later the subsequent behaviour of all the customers in the sampling frame was reviewed, demonstrating unequivocally that customer satisfaction surveys make customers more loyal38,39. According to Dholakia and Morwitz’s conclusions, the customers who took part in the customer satisfaction survey were much more loyal. They were:

• More than three times as likely to have opened new accounts.
• Less than half as likely to have defected.
• More profitable than the control group.

Even 12 months later, people who had taken part in a ten-minute customer satisfaction interview were still opening new accounts at a faster rate and defecting less than customers in the control group. Customers like to be consulted. The authors conclude that customers value the opportunity to provide feedback, positive or negative, on the organisation’s ability to meet their requirements. Surveys can also heighten respondents’ awareness of a company’s products, services or other benefits, thus also influencing their future behaviour.

KEY POINT
Conducting customer satisfaction surveys has a very positive effect on the organisation’s reputation in the eyes of participants.
Conclusions

1. Customer satisfaction is simply a convenient phrase to describe the attitudes and feelings that customers hold about an organisation.
2. It is an irrelevance to consider the relative merits of satisfaction and loyalty. They are different links in a chain of cause and effect – satisfaction attitudes driving loyalty behaviours. Both must therefore be monitored and managed to achieve organisational success.
3. Since attitudes precede behaviours, customer satisfaction is a lead indicator of future organisational performance. Loyalty behaviours are extremely important but are lagging measures.
4. It is true that satisfied customers often defect in some markets. That’s because they’re not satisfied enough.
5. To reap the full benefits of customer loyalty, companies need to make customers highly satisfied. The zone of indifference, or ‘mere satisfaction’, is not good enough. This highlights the importance of understanding the difference between making more customers satisfied and making customers more satisfied.
6. Even though the relationship between satisfaction and loyalty is not linear, it is widely recognised that satisfaction is the main driver of loyalty.
7. Since customers’ loyalty behaviours are driven by their attitudes (primarily satisfaction levels), loyalty must be managed through satisfaction rather than directly, emphasising the importance of producing actionable outcomes from customer satisfaction surveys.
8. Many organisations have failed to use the information generated by customer surveys to improve satisfaction. This is not because customer satisfaction surveys don’t work but because many are based on flawed methodologies.
9. Even with accurate and actionable information from surveys, it is extremely difficult to improve customer satisfaction. Many organisations attempt to achieve it on the cheap, forcing the managers responsible to opt for faddish quick wins rather than the long game of getting the basics right and doing best what matters most to customers.
10. Some also attempt to monitor it using misleading substitute measures such as internal performance metrics, complaints analysis or mystery shopping.
11. In the light of conclusions 8, 9 and 10 together, it’s not surprising that most companies do not achieve sufficiently high levels of customer satisfaction and loyalty to derive the full financial benefits.
12. Organisations that conduct professional customer satisfaction surveys can expect their CSM process to have a positive impact on customers’ views of the company.
References

1. Heskett, Sasser and Schlesinger (1997) “The Service-Profit Chain”, Free Press, New York
2. McGovern, Court, Quelch and Crawford (2004) “Bringing Customers into the Boardroom”, Harvard Business Review, November
3. Reichheld, Markey and Hopton (2000) “The Loyalty Effect – the relationship between loyalty and profits”, European Business Journal 12(3)
4. Bhote, Keki R (1996) “Beyond Customer Satisfaction to Customer Loyalty: The Key to Greater Profitability”, American Marketing Association
5. Gitomer, Jeffrey (1998) “Customer Satisfaction is Worthless, Customer Loyalty is Priceless”, Bard Press
6. Griffin, Jill (2002) “Customer Loyalty: How to Earn it, How to Keep it”, Jossey-Bass, San Francisco
7. Jones and Sasser (1995) “Why Satisfied Customers Defect”, Harvard Business Review 73 (November-December)
8. Hill and Alexander (2006) “The Handbook of Customer Satisfaction and Loyalty Measurement”, 3rd Edition, Gower, Aldershot
9. Stum and Thiry (1991) “Building Customer Loyalty”, Training and Development Journal (April)
10. Reichheld, Frederick (1993) “Loyalty-Based Management”, Harvard Business Review 71 (March-April)
11. Stewart, Mark (1996) “Keep the Right Customers”, McGraw-Hill, London
12. Johnson and Gustafsson (2000) “Improving Customer Satisfaction, Loyalty and Profit: An Integrated Measurement and Management System”, Jossey-Bass, San Francisco, California
13. Keiningham and Vavra (2001) “The Customer Delight Principle”, McGraw-Hill, Chicago
14. Rust and Zahorik (1993) “Customer satisfaction, customer retention and market share”, Journal of Retailing 69(2)
15. Rust, Zahorik and Keiningham (1994) “Return on Quality (ROQ): Making service quality financially accountable”, Marketing Science Institute, Cambridge, Massachusetts
16. Zeithaml, Berry and Parasuraman (1996) “The behavioral consequences of service quality”, Journal of Marketing 60
17. White and Schneider (2000) “Climbing the Commitment Ladder: The role of expectations disconfirmation on customers’ behavioral intentions”, Journal of Service Research 2(3)
18. Heskett, Sasser and Schlesinger (2003) “The Value-Profit Chain”, Free Press, New York
19. The American Customer Satisfaction Index, www.theacsi.org
20. Fornell, Claes et al (2005) “The American Customer Satisfaction Index at Ten Years: Implications for the Economy”, Stephen M Ross School of Business, University of Michigan
21. Barwise and Meehan (2004) “Simply Better: Winning and keeping customers by delivering what matters most”, Harvard Business School Press, Boston
22. Reichheld, Frederick (2003) “The One Number you Need to Grow”, Harvard Business Review 81 (December)
23. Which? Online (1996) “Mobile Phone”, Consumers’ Association (December)
24. Peterson and Wilson (1992) “Measuring Customer Satisfaction: Fact and Artifact”, Journal of the Academy of Marketing Science (Winter)
25. Myers, James H (1999) “Measuring Customer Satisfaction: Hot buttons and other measurement issues”, American Marketing Association, Chicago, Illinois
26. Peters and Austin (1986) “A Passion for Excellence”, William Collins, Glasgow
27. Keiningham, Vavra, Aksoy and Wallard (2005) “Loyalty Myths”, John Wiley and Sons, Hoboken, New Jersey
28. Szwarc, Paul (2005) “Researching Customer Satisfaction and Loyalty”, Kogan Page, London
29. Morrison, Colman and Preston (1997) “Mystery customer research: cognitive processes affecting accuracy”, Journal of the Market Research Society 46(4)
30. Snaith, Tim (2006) “Why customer research is undermining customer loyalty”, Customer Management 14(6)
31. Reinartz and Kumar (2000) “On the Profitability of Long-Life Customers in a Non-contractual Setting: An Empirical Investigation and Implications for Marketing”, Journal of Marketing 64
32. Morwitz, Johnson and Schmittlein (1993) “Does Measuring Intent Change Behavior?”, Journal of Consumer Research 20 (June)
33. Fitzsimons and Morwitz (1996) “The Effect of Measuring Intent on Brand-Level Purchase Behavior”, Journal of Consumer Research 23 (June)
34. Fitzsimons and Williams (2000) “Asking Questions Can Change Choice Behavior: Does it do so Automatically or Effortfully?”, Journal of Experimental Psychology: Applied 6(3)
35. Spangenberg and Greenwald (1999) “Social Influence by Requesting Self-Prophecy”, Journal of Consumer Psychology 39 (August)
36. Morwitz and Fitzsimons (2000) “The Mere-Measurement Effect: Why Does Measuring Purchase Intentions Change Actual Purchase Behavior?”, Working Paper, New York University, New York
37. Fitzsimons and Shiv (2001) “Nonconscious and Contaminative Effects of Hypothetical Questions on Subsequent Decision Making”, Journal of Consumer Research 28 (September)
38. Dholakia and Morwitz (2002) “How Surveys Influence Customers”, Harvard Business Review 80(5)
39. Dholakia and Morwitz (2002) “The scope and persistence of mere-measurement effects: Evidence from a field study of customer satisfaction measurement”, Journal of Consumer Research 29(2)
CHAPTER TWO
The benefits of customer satisfaction

Customer satisfaction isn’t a new concept. Just the opposite – it’s at least 200 years old. As long ago as the 18th century, Adam Smith clarified the fundamental premise on which free markets operate1. He maintained that since human beings continually strive to maximise their utility (get the greatest benefit for the least cost), they migrate, gradually but inexorably, to the suppliers that come closest to delivering it. In other words, they search out and stay with companies that do best what matters most to customers. Customer satisfaction is the phrase commonly used to encapsulate this phenomenon. It means that suppliers make more profit as customers become better off.

230 years later this win-win equation still fuels most markets worldwide. It’s based on the almost irresistible force of people getting what they want. People running companies want maximum profits. Their customers want maximum ‘utility’ – the greatest possible gratification at the least cost. Unsurprisingly, the more gratifying the customer experience is, the more likely customers are to repeat it, and vice versa. This is demonstrated at macro level by 12 years of ACSI (American Customer Satisfaction Index) data showing that in the USA, changes in customer satisfaction have accounted for more of the variation in future spending growth than any other factor, including income or consumer confidence2. In other words, if American consumers are more satisfied generally by the things the American economy is delivering to them (and by the way in which they are delivered), their rate of spending increases. If their satisfaction goes down, so does their spending and the country’s economic growth.
At a glance
This chapter explains why customer satisfaction matters and will cover:
a) How customer satisfaction translates into profits through Customer Lifetime Value.
b) The close relationship between customer satisfaction and employee satisfaction.
c) How customer satisfaction affects returns to shareholders.
d) The macro-economic implications of customer satisfaction.
e) The arguments for customer satisfaction in the public and not-for-profit sectors.
2.1 Benefits for companies

It is now widely accepted that whilst the ultimate goal of a private sector company may be to deliver profits to shareholders, this will be achieved through delivering results to customers3. This is based on the fundamental psychological principle that people will want more of the experiences that give them pleasure whilst avoiding the unpleasing or dissonant experiences4. It explains why it is more profitable to keep existing customers than to win new ones – five times more profitable on average, according to figures released by the American Department of Consumer Affairs as long ago as 1986. This section outlines some of the commonly recognised reasons why customer satisfaction matters for private sector companies. Much of the data quoted is from America, simply because America has much more published data than other countries on the financial outcomes of customer satisfaction. Since the relationships described are economic rather than attitudinal or cultural, they are applicable to all developed economies.

KEY POINT
The profitability of customers increases the longer you keep them.

2.1.1 Customer Lifetime Value

Customer retention is more profitable than customer acquisition because the value of customers typically increases over time5,6,7,8. As shown in Figure 2.1, this is due to the following factors:

Acquisition – the cost of acquiring customers occurs almost exclusively in their first year with the company (i.e. before and as they become customers).
Base profit – is constant, but often will not begin to offset acquisition costs until the second year or later.
Revenue growth – as customers stay, and provided they are satisfied, they tend to buy more of a company’s products/services as their awareness of the product portfolio grows.
Cost savings – long-term customers cost less to service, since they are more familiar with the organisation’s procedures and more likely to get what they expect.
Referrals – highly satisfied customers will recommend companies to their friends. Referral customers eliminate most of the cost of acquisition, and they also tend to be better customers because they are like existing customers.
Price premium – long-term customers who are very satisfied will also be prepared to pay a price premium, since they trust the supplier to provide a product/service that is good value for them.

Harvard summarised these loyalty behaviours as the 3Rs (retention, related sales and referrals) and, based on 30 years of research, concluded that ‘loyal’ customer behaviours explain differences in companies’ financial performance more than any other factor. Harvard and others have also pointed out that customer satisfaction is the main driver of customer loyalty3,9,10.
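The logic of these factors can be sketched as a simple cumulative-profit calculation. The sketch below is ours, not the authors’: every figure is hypothetical and the growth and referral assumptions are deliberately crude. It shows only the shape of the argument – annual profit per customer rises with tenure while the acquisition cost is a one-off hit in year 0, so a retained customer is worth disproportionately more than a quickly lost one.

```python
# Illustrative Customer Lifetime Value sketch.
# All figures are hypothetical and chosen for simplicity; none come from the book.

ACQUISITION_COST = 50   # one-off cost, incurred in year 0
BASE_PROFIT = 20        # constant annual margin per customer
REVENUE_GROWTH = 5      # extra annual profit per year of tenure (cross-/up-selling)
COST_SAVING = 2         # servicing assumed to get cheaper each year of tenure
REFERRAL_VALUE = 3      # profit attributed to recommendations (assumed from year 2)
PRICE_PREMIUM = 4       # premium paid by long-standing, satisfied customers

def annual_profit(year: int) -> int:
    """Profit contributed by one retained customer in a given year of tenure."""
    profit = BASE_PROFIT + year * (REVENUE_GROWTH + COST_SAVING)
    if year >= 2:  # referrals and price premiums assumed to kick in later
        profit += REFERRAL_VALUE + PRICE_PREMIUM
    return profit

def cumulative_value(years_retained: int) -> int:
    """Net value of a customer kept for the given number of years."""
    return sum(annual_profit(y) for y in range(years_retained)) - ACQUISITION_COST

# A customer lost after one year does not even cover acquisition;
# one kept for five years is worth many times the year-one figure.
print(cumulative_value(1))
print(cumulative_value(5))
```

Under these made-up numbers the one-year customer is loss-making while the five-year customer is strongly profitable, which is the whole argument of Figure 2.1 in miniature: retention, not acquisition, is where the profit accumulates.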
FIGURE 2.1 Customer value increases over time5
[Chart: annual customer profit for years 0–7, built up in layers from base profit, revenue growth, cost savings, referrals and price premium, with the year-0 acquisition cost shown as a negative bar.]
2.1.2 Links with employee satisfaction

The link between customer satisfaction and employee satisfaction has been recognised by the work of Harvard and others – Harvard labelling it “the customer-employee satisfaction mirror”. They have demonstrated not only that employee satisfaction typically produces higher levels of customer satisfaction (since more satisfied employees are more highly motivated to give good service), but also that higher customer satisfaction produces higher employee satisfaction, since employees prefer working for companies that have high levels of customer satisfaction and low levels of problems and complaints. More satisfied employees stay longer, keeping valuable expertise and customer relationships within the organisation. Conversely, high staff turnover has a negative effect on customer satisfaction3,11. This was fully reflected in the Safeway example quoted in Chapter 1. At the same time, Safeway’s rival Kroger also had problems with employee satisfaction, resulting in a fall in its customer satisfaction index to a level similar to Safeway’s 71%16.

2.1.3 Sales and profit

Some companies have built fully validated models that precisely quantify the relationship between employee satisfaction, customer satisfaction and financial performance. These include the Canadian Imperial Bank of Commerce (CIBC), which built a service-profit chain model demonstrating that each 2% increase in customer loyalty would generate an additional 2% in net profit. They also quantified the causal links in the chain back from customer loyalty to customer satisfaction and to
employee satisfaction. For example, they found that to produce an additional 2% gain in customer loyalty, an improvement of 5% in employee satisfaction was required12. An example from retailing is Sears Roebuck which, using a similar profit chain modelling approach to that adopted by CIBC, demonstrated that a 5% gain in employee satisfaction drives a 1% gain in customer satisfaction which, in turn, leads to an additional 0.5% increase in profit13. Aggregate data from the American Customer Satisfaction Index have also demonstrated a very strong link between customers’ satisfaction with individual companies and their propensity to spend more with them in future. In fact, every 1% increase in customer satisfaction is associated with a 7% increase in operational cash flows, and the time lag is as short as three months, although this does vary by sector14.

2.1.4 Shareholder value

The University of Michigan has reported that the top 50% of companies in the ACSI generated an average of $42 billion of shareholder wealth (Market Value Added), as against $23 billion for the bottom 50%15. Based on the ACSI database, a 1% increase in customer satisfaction drives a 3.8% increase in stock market value. Between 1997 and 2003 (a period that saw huge rises and falls in stocks), share portfolios based on the ACSI out-performed the Dow by 90%, the S&P by 208% and the NASDAQ by 344%. Almost echoing the words of Adam Smith, Professor Fornell says the reason for this is simply that “…our economic system works. It was designed with the idea that sellers should compete for buyers’ satisfaction. Satisfied customers reward companies with, among other things, their repeat business, which has a huge effect on cumulative profits”2.

KEY POINT
Companies with higher customer satisfaction produce better returns for shareholders.

This is consistent with an earlier study based on winners of the Malcolm Baldrige quality awards (the largest single component of which is customer satisfaction). When challenged in a lecture that many Baldrige winners had not been financially successful, quality guru Joseph Juran responded that he was sure that a share portfolio based on Baldrige Award winners would out-perform a general tracker fund. When Business Week decided to test this theory, Juran was proved correct, with the Baldrige-based fund achieving an 89% return against 33% overall for the Standard & Poor’s 50017.
FIGURE 2.2 Customer satisfaction and shareholder value in the USA16
[Chart: Market Value Added (MVA, $ billions) of the top 50% versus the bottom 50% of ACSI firms, 1994–2002, with the top 50% consistently and substantially ahead.]
Eleven years of ACSI data have also produced many company-specific examples, both positive and negative, of the link between customer satisfaction and shareholder value2. Take two contrasting examples in the computer industry. After its inclusion in the ACSI in 1997, Dell at first improved its customer satisfaction, and its revenues grew by a significant margin. As a growing proportion of PC purchases are for replacement, the customer satisfaction-loyalty link is increasingly important in this sector. This proved unfortunate for Gateway, a direct competitor of Dell, whose large falls in customer satisfaction (despite extensive price cutting) were matched by its poor financial performance. However, in more recent years Dell’s customer satisfaction has also fallen, especially in 2005 when it fell substantially, by 5% to 74%. It was now Dell’s turn to see aggressive cost cutting matched by poorer service and a large fall in customer satisfaction, and the company eventually admitted its customer service problems. By 2006, Dell’s share price had reached a five-year low, against a backdrop of substantial increases in stock prices generally over the same period.

One of the biggest declines in customer satisfaction occurred in the telecoms sector – a 26% fall for Qwest Communications between 1995 and 2002. Perhaps Qwest’s shareholders should have been monitoring customer satisfaction. The share price didn’t react until 2000, but since then the company has lost 90% of its market value (and the shareholders most of their investment).

In 1994 Hyundai had the lowest customer satisfaction of any car manufacturer, down at 68%, and a very poor reputation for quality and reliability. Ten years on it had gradually raised customer satisfaction to 81% (and subsequently to 84% by 2006), largely through improvements in product and service quality. The customer satisfaction gains have been fully reflected in Hyundai’s higher sales and stock price.
A well known case study from the Harvard profit chain literature3,18 is MBNA. Over a couple of decades, MBNA climbed from 38th place to become the largest issuer of credit cards in the USA. The rise started in the early 1980s, when the company identified that it was barely keeping its customers long enough for them to become profitable5. MBNA’s President, Charles Cawley, responded by basing the company’s future strategy on maximising customer lifetime value through delivering superb customer satisfaction. For 20 years MBNA has measured customer satisfaction daily, contributing cash to a bonus fund every day that its customer satisfaction index is above target. The accumulated bonus is paid to all staff every quarter and typically enables employees to boost their earnings by 20%. Customer satisfaction-related pay is covered in Chapter 15.
2.2 Benefits for the economy

Since there is no comparable information source in the UK, the evidence outlined in this section is drawn from the conclusions of the University of Michigan based on American Customer Satisfaction Index data2. As they point out, "At the macro level, customer satisfaction and household spending are at the hub of a free market. In one way or another, everything else – employment, prices, profits, interest rates, production and economic growth itself – revolve around consumption." If customers reduce their spending the economy moves into recession. If they increase it, albeit by a very small percentage, the positive effects on economic growth will be significant.

As we have seen, customers reward companies that satisfy them and punish those that don't. This fact is fundamental to the way free markets operate – driving them to deliver as much customer satisfaction as they can in the most efficient way possible. This phenomenon has been strengthened by the growing power of customers based on their higher levels of education and confidence plus dramatically increased sources of information. This has resulted in the production-led economies of the past turning into today's customer-driven markets. There is also growing evidence that today's affluent customer in developed economies has become more interested in quality of life (doing things) than material wealth (owning things)19.

KEY POINT Customers today are placing more emphasis on experiences rather than possessions.

2.2.1 The value of experiences

All the way back to Maslow, studies have shown that once the basic needs of food and shelter are met, extra material wealth does not necessarily lead to greater happiness20,21,22,23,24. Summarising this mountain of research in 1999, Frank25 concluded that "increases in our stocks of material goods produce virtually no measurable gains in our psychological or physical well being. Bigger houses and faster cars, it seems, don't make us any happier." Clearly, quality of life is becoming more
important than quantity of possessions. Van Boven and Gilovich's 2003 study19 demonstrated that experiential purchases (doing) brought people more long term satisfaction and happiness than material ones (having). They concluded that experiences are more central to a person's identity than possessions, they tend to be more favourably viewed as time passes and they have greater social value (in other words they are more interesting to talk about).

Although most of the preceding research was conducted in America, similar trends have been identified in the UK and Europe. Future Foundation report that whilst materialistic accumulation remains important to European consumers, they are increasingly "seeking satisfaction from the growing 'experience economy'."26 This involves a greater emphasis on hedonism, self-development, holidays and ethical consumption. In a separate study of 1,000 UK adults, almost 50% (and over 50% of baby boomers) chose personal fulfilment as their main priority in life, more than double the number that selected it 20 years ago27. According to Future Foundation: "Our affluent society prioritises personal fulfilment and this culture fuels increasing and more diverse leisure participation."

Pine and Gilmore have labelled this phenomenon 'the experience economy', suggesting that developed countries have evolved not just from manufacturing to service economies but on a stage further28,29. In experience economies, suppliers should focus on providing customers not just with a product or service, but with a satisfying and memorable experience. Driven forward by more literature30,31, there is growing awareness amongst organisations of the importance of the customer experience. This should be seen as the total customer experience, in other words, the sum of all functional and emotional benefits perceived by customers as a result of their experience with a supplier32. Suppliers should therefore consider all the cues that influence the total experience that the customer perceives33 and aim to orchestrate them into a planned and consistent message34.

2.2.2 The role of customer satisfaction

One could therefore say that whilst GDP is a measure of the amount, or quantity, of economic activity (having), customer satisfaction is a measure of its quality (experiencing). If it is true that people seek to repeat high quality, pleasurable experiences but avoid those of low quality, we would expect to see a relationship between these two indicators. Analysts at the University of Michigan have identified "a significant relationship between ACSI changes and subsequent GDP changes, a relationship that operates via consumer spending"2. Whilst it is obvious that the level of consumer spending is based on the amount of money that people have to spend, it is crucial to understand that it is also affected by their willingness to spend it35. Whilst some spending is down to necessity (e.g. the food and shelter necessary for survival), most spending in developed economies is beyond that level and is driven by the anticipated amount of satisfaction that the spending will produce. To quote the University of Michigan again, "The importance of this can hardly be overstated.
Since its inception, the data show that ACSI has accounted for more of the variation in future spending growth than any other factor, be it economic (income, wealth) or psychological (consumer confidence)."2

KEY POINT As a measure of the quality as opposed to the quantity of GDP, customer satisfaction is a key lead indicator of consumers' willingness to spend and relates strongly to economic growth.
2.3 Benefits in the public and not-for-profit sectors

Most of this discussion has focused on the bottom-line arguments for customer satisfaction, which are taken by many to be more or less self-evident. But profitability, per se, is not a prime consideration in the public or not-for-profit sectors. What then is the argument for satisfying customers in these sectors?

2.3.1 Financial arguments

Although not motivated by profit, organisations in these sectors must be very aware of the cost implications of dissatisfied customers. Dissatisfied customers complain more, soaking up valuable resources in dealing with their complaints5. It has also been shown that customer satisfaction and employee satisfaction are related (the "mirror effect")3,11. Organisations with satisfied customers are more likely to have satisfied and engaged employees, which in turn leads to lower turnover and absenteeism, thus lowering the cost of employment.

2.3.2 Reputation

Organisations with more satisfied customers tend to have a better public image and reputation. Such reputation benefits often lag somewhat behind actual performance, so can sometimes seem unfair, but in time they tend to gravitate towards an accurate depiction of an organisation's ability to satisfy customers. Ultimately the aim for many organisations in these sectors is to establish trust with the public in general. A good reputation built on a solid basis of high levels of customer satisfaction is key to establishing that trust.

2.3.3 Culture

Similar benefits accrue internally for organisations that are good at satisfying their customers. As well as having more satisfied employees, organisations with satisfied customers tend to have better morale, and employees are more likely to feel pride in their place of work. It becomes easier both to recruit and retain good staff under these circumstances.
2.3.4 For the public benefit

Finally, and perhaps most compellingly for the public sector, customer satisfaction is the ultimate arbiter of the success of public organisations. Such organisations exist to serve the public, rather than shareholders or owners, and as such their success should be judged by their ability to deliver what the public wants. This has been the policy of successive governments in the UK for many years now, although they have failed, so far, to implement an accurate and consistent CSM system to monitor their success.
2.4 Conclusions

It's now over 20 years since the American Department of Consumer Affairs informed the world that keeping existing customers is far more profitable than winning new ones. This is because the profitability of customers grows over time – as long as their requirements are met or exceeded. In summary, customer satisfaction pays because:

1. Satisfied customers gradually buy a wider range of a supplier's products or services.
2. They become less price sensitive.
3. They cost less to service.
4. They recommend the supplier more, and evidence shows that referred customers tend to be much more loyal than those acquired through sales and marketing activities.
5. Every customer that a company keeps rather than loses, and every customer that it gains through recommendation, reduces the need for the heavy investment required to win new customers.
6. Some companies, like the Canadian Imperial Bank of Commerce, have calculated the precise financial value of each 1% gain in customer satisfaction. This type of 'profit chain modelling' requires extensive information and extremely complex statistical modelling but has considerable value for investment decisions, especially if the profit chain is traced back to employee satisfaction.
7. Without realising it, many companies are still falling into the trap of failing to keep their customers long enough to reap anything like the full reward of the 3Rs. To rectify this problem companies need to follow MBNA's example and develop an accurate understanding of their current customer lifetime value before developing and implementing a strategy to increase it.
8. At the macro level, organisations like Harvard and Michigan Business Schools (supported by huge databases such as the ACSI) have published copious evidence that companies with highly satisfied customers are far more successful financially than those providing poor service.
9. There is now compelling evidence that people in developed economies are increasingly driven by experiences or quality of life rather than material possessions. At the national level customer satisfaction is a measure of the quality (as opposed to the quantity) of GDP.
10. Since people seek to repeat pleasurable experiences but avoid unpleasant ones, it is not surprising that the University of Michigan has identified a pivotal role for customer satisfaction in determining future customer spending and hence economic growth.
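The customer lifetime value idea behind points 6 and 7 can be sketched numerically. The following Python fragment is a deliberately simplified illustration, not any company's actual profit chain model: lifetime value is taken as the discounted sum of annual profit, with the customer "surviving" each year with probability equal to the retention rate. All the inputs are hypothetical.

```python
# Minimal customer lifetime value sketch: discounted annual profit over
# an expected lifetime implied by the retention rate. Inputs hypothetical.

def customer_lifetime_value(annual_profit, retention_rate, discount_rate):
    """Expected present value of one customer: each year the customer
    survives with probability retention_rate, and profit is discounted."""
    value, survival = 0.0, 1.0
    for year in range(1, 31):                      # 30-year horizon
        survival *= retention_rate                 # chance still a customer
        value += annual_profit * survival / (1 + discount_rate) ** year
    return value

base = customer_lifetime_value(100.0, 0.80, 0.10)
improved = customer_lifetime_value(100.0, 0.85, 0.10)  # satisfaction lifts retention
print(round(base, 2), round(improved, 2))
```

Even this toy model shows the leverage involved: a five-point gain in retention raises lifetime value by roughly a quarter, which is why tracing satisfaction through to retention is worth the modelling effort.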
References
1. Smith, Adam (1776) "The Wealth of Nations"
2. Fornell, Claes et al (2005) "The American Customer Satisfaction Index at Ten Years: Implications for the Economy, Stock Returns and Management", Stephen M Ross School of Business, University of Michigan
3. Heskett, Sasser and Schlesinger (2003) "The Value-Profit Chain", Free Press, New York
4. Festinger, Leon (1957) "A Theory of Cognitive Dissonance", Stanford University Press, Stanford
5. Reichheld, Frederick (2001) "The Loyalty Effect", 2nd edition, Harvard Business School Press, Boston
6. Rust and Zahorik (1993) "Customer satisfaction, customer retention and market share", Journal of Retailing 69(2)
7. White and Schneider (2000) "Climbing the Commitment Ladder: The role of expectations disconfirmation on customers' behavioral intentions", Journal of Service Research 2(3)
8. Reichheld and Sasser (1990) "Zero Defections: Quality Comes to Services", Harvard Business Review 68 (September-October)
9. Sasser and Jones (1995) "Why Satisfied Customers Defect", Harvard Business Review 73 (November-December)
10. Rust, Zahorik and Keiningham (1996) "Making Service Quality Financially Accountable" in "Readings in Service Marketing", Harper Collins
11. Heskett and Schlesinger (1997) "Out in Front: Building High Capability Service Organisations", Harvard Business School Press, Boston
12. Tofani, Joanne (2000) "The People Connection: Changing Stakeholder Behavior to Improve Performance at CIBC", conference paper, ASQ Customer Satisfaction and Loyalty Conference, San Antonio, Texas
13. Rucci, Kirn and Quinn (1998) "The Employee-Customer Profit Chain at Sears", Harvard Business Review 76 (January-February)
14. Gruca and Rego (2003) "Customer Satisfaction, Cash Flow and Shareholder Value", Marketing Science Institute
15. Fornell, Claes (2001) "The Science of Satisfaction", Harvard Business Review 79 (March-April)
16. The American Customer Satisfaction Index, www.theacsi.org
17. (1993) "Betting to Win on the Baldrige Winners", Business Week, October 18th
18. Heskett, Sasser and Schlesinger (1997) "The Service-Profit Chain", Free Press, New York
19. Van Boven and Gilovich (2003) "To do or to have? That is the question", Journal of Personality and Social Psychology 85
20. Maslow, A H (1943) "A theory of human motivation", Psychological Review 50
21. Richins and Dawson (1992) "A consumer values orientation for materialism and its measurement: Scale development and validation", Journal of Consumer Research 19
22. Kasser and Ryan (1993) "A dark side of the American dream: Correlates of financial success as a central life aspiration", Journal of Personality and Social Psychology 65
23. Kasser and Ryan (1996) "Further examining the American dream: Differential correlates of intrinsic and extrinsic goals", Personality and Social Psychology Bulletin 22
24. Kasser, T (2002) "The High Price of Materialism", MIT Press, Boston
25. Frank, R H (1999) "Luxury Fever: Why money fails to satisfy in an era of excess", Free Press, New York
26. Quorin, M (2006) "Personal Aspirations in Europe", Future Foundation, London
27. Quorin, M (2006) "A Life of Leisure", Future Foundation, London
28. Pine and Gilmore (1998) "Welcome to the Experience Economy", Harvard Business Review 76 (July-August)
29. Pine and Gilmore (2002) "The Experience Economy: Work is Theatre and Every Business a Stage", Harvard Business School Press, Boston
30. LaSalle and Britton (1999) "Priceless: Turning Ordinary Products into Extraordinary Experiences", Harvard Business School Press, Boston
31. Diller, Shedroff and Rhea (2006) "Making Meaning: How Successful Businesses Deliver Meaningful Customer Experiences", New Riders Publishing
32. Shaw and Ivens (2002) "Building Great Customer Experiences", Palgrave Macmillan, Basingstoke
33. Berry, Carbone and Haeckel (2002) "Managing the Total Customer Experience", MIT Sloan Management Review 43(2) (Spring)
34. Zaltman, Gerald (2003) "How Customers Think", Harvard Business School Press, Boston
35. Katona, George (1979) "Toward a macropsychology", American Psychologist 34(2)
CHAPTER THREE
Methodology essentials

Chapter 2 outlined the plentiful evidence that high levels of customer satisfaction pay. As much of that information is more than ten years old, one would have expected to see huge progress in companies' ability to satisfy customers over the last decade, especially since most organisations claim that customer satisfaction is an important goal. That progress hasn't happened. This chapter considers the reasons for this failure and suggests some solutions.
At a glance
In this chapter we will:
a) Present the evidence that customer satisfaction is not improving.
b) Provide a definition of customer satisfaction plus an explanation of the concept and how it affects organisational performance.
c) Review the reasons why many organisations fail to take effective action to improve customer satisfaction.
d) Explain the necessity for measures.
e) Highlight the fundamental essentials of an accurate CSM methodology.
3.1 Customer satisfaction isn't improving

Based on over ten years of data from the American Customer Satisfaction Index, Figure 3.1 shows that customer satisfaction in the USA remains below its starting point in 19941. Whilst there is no comparable trend data from the UK, the satisfaction benchmarking database of specialist customer research company The Leadership Factor, based on several hundred customer satisfaction surveys per annum, leads to a similar conclusion and is shown in Figure 3.22.

KEY POINT Despite its importance, many organisations are failing to improve customer satisfaction.
FIGURE 3.1 Customer satisfaction trends in the USA (ACSI, 1994 to 2006; index ranging between roughly 70 and 75)

FIGURE 3.2 Customer satisfaction trends in the UK (The Leadership Factor satisfaction index, 1997 to 2007; between 75% and 81%)
So in the face of all this overwhelming evidence about the benefits of customer satisfaction, and despite the lip service paid to it, why is it that companies have been so unsuccessful at improving it? There are three reasons:

1) People don't understand customer satisfaction. More specifically, they don't understand the implications of different levels of customer satisfaction and the level they need to achieve to benefit their own organisation.

2) They don't have an accurate measure of customer satisfaction, so they lack the most fundamental tool for making sure the organisation is achieving the required level of satisfaction. Since the science of measuring satisfaction is now at least two decades old, there is little excuse for this.
3) Even if they do have accurate and actionable measures, they don't take the necessary action – often linked to the first point but not always.

We'll consider the first and third reasons initially before moving on to outline the essential aspects of a CSM methodology that will provide an accurate measure of how satisfied or dissatisfied customers feel as well as reliable information on how to improve it.
3.2 Understanding customer satisfaction

3.2.1 A definition of customer satisfaction

The most straightforward definition of customer satisfaction was provided by American marketing guru Philip Kotler: "If the product matches expectations, the consumer is satisfied; if it exceeds them, the consumer is highly satisfied; if it falls short, the consumer is dissatisfied."3 Crucial in this definition is the view that satisfaction is a relative concept encompassing the customer's expectations as well as the performance of the product4.

Whilst early definitions were product focused, it has since been recognised that customer satisfaction applies equally to services and to any individual element of a customer's product or service experience. Hence, Oliver has defined customer satisfaction as "a judgement that a product or service feature, or the product or service itself, provided (or is providing) a pleasurable level of consumption-related fulfilment, including levels of under- or over-fulfilment."5

Although customers make satisfaction judgements about products and services, customer satisfaction should not be confused with service quality. Firstly, customer satisfaction is broader in scope than service quality, which is "only one component of a customer's level of satisfaction."6 Secondly, a product or service must be experienced to make a satisfaction judgement, but that is not an essential prerequisite for developing an attitude about quality5. For example, it is possible for people to form opinions about the quality of a car or the service quality delivered by staff in a hotel based on advertising, reputation or word-of-mouth, whereas it is not possible to be satisfied or dissatisfied with them without driving the car or staying in the hotel.
Thirdly, although most quality management academics and practitioners would subscribe to the ‘user-based approach’ rather than the ‘technical approach’7, judgements of satisfaction are typically much more subjective and emotional than quality judgements. It was this principle that prompted Tom Peters to coin his famous “perception is reality” phrase. He emphasised that whilst customers’ judgements may be “idiosyncratic, human, emotional, end-of-the-day, irrational, erratic”8, they are the attitudes on which customers everywhere base their future behaviours. As Peters says, the possibility that customers’ judgements are unfair is scant consolation once they have taken their business elsewhere.
DEFINITION Customer satisfaction, or dissatisfaction, is the feeling a customer has about the extent to which their experiences with an organisation have met their needs.

3.2.2 Attitudes and behaviours

So customer satisfaction is a relative concept. It's the customer's subjective judgement or feeling – the attitudes they hold – about the extent to which their requirements have been met by a supplier. However, satisfaction is rarely an end in itself because whilst it is pleasing that customers hold favourable attitudes, that's of little value if they're not behaving like loyal customers. As we saw in Chapter 2, it is customers' behaviour that enables companies to achieve their objectives, particularly desirable behaviours such as buying more often, spending more or recommending the organisation to others. The reason why the measurement of customer satisfaction is so important is that attitudes drive behaviours, so customer satisfaction is a key lead indicator of future customer behaviours and, therefore, future company performance. However, to maximise the benefit of this powerful management tool, it is vital to separate the attitudinal and behavioural aspects of customer satisfaction, as illustrated in Figures 3.3 and 3.4.

KEY POINT Satisfaction is an attitude, loyalty is a behaviour.

FIGURE 3.3 Attitudes and behaviours (Customer attitudes → Customer behaviour → Organisational outcomes)

CSM is totally focused on the first oval in the diagrams – measuring customers' attitudes about how satisfied they feel with the organisation. As lead indicators, these attitudes provide by far the most useful data for managing organisational performance. Obviously, customers' behaviours, especially their loyalty behaviours, are extremely important to organisations, but they are lagging indicators. By the time a customer has defected or chosen an alternative supplier for a related product or service, the opportunities have been missed. That is not to say that customer behaviours should not be monitored. Information such as customer defection rates, average spend and complaints is all extremely useful, but it should not be confused with measures of customer satisfaction.

FIGURE 3.4 How satisfaction translates to profit (Customer satisfaction → Customer loyalty → Company profit)
KEY POINT Satisfaction is a lead indicator. Loyalty, sales and other measures of organisational performance are lagging ones.

3.2.3 How satisfaction affects loyalty

Another widely misunderstood aspect of customer satisfaction is how it translates into loyalty and profit. To say that satisfied customers will be more loyal than dissatisfied ones, whilst broadly true, is far too simplistic. As we said in Chapter 1, there are different levels of satisfaction and these can affect companies in widely differing ways. Whilst it is obvious that dissatisfied customers will rarely be loyal whereas highly satisfied ones will be, what about those in the mid ranges of satisfaction? As the Harvard researchers point out in Figure 3.59, the zone of indifference isn't good enough for most companies. To maximise customer lifetime value, suppliers have to make customers so satisfied that there is no point even thinking about switching10. Jones and Sasser11 point out that most companies don't understand the extent to which high levels of satisfaction rather than 'mere satisfaction'12 are necessary.

FIGURE 3.5 The relationship between satisfaction and loyalty (loyalty plotted against satisfaction on a 1-10 scale: a 'zone of affection' at the top where highly satisfied 'apostles' sit, a 'zone of indifference' in the middle, and a 'zone of defection' at the bottom containing dissatisfied 'saboteurs')
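The zone structure of Figure 3.5 can be sketched as a simple lookup from a 1-10 satisfaction score to Jones and Sasser's zones. The cut-off scores below are illustrative assumptions for the sketch, not values published in the study; the point is only that the satisfaction-loyalty relationship is stepped rather than linear.

```python
# Illustrative mapping from a 1-10 satisfaction score to the
# satisfaction-loyalty zones of Figure 3.5. Cut-offs are assumptions.

def loyalty_zone(satisfaction):
    """Classify a 1-10 satisfaction score into a loyalty zone."""
    if not 1 <= satisfaction <= 10:
        raise ValueError("satisfaction must be between 1 and 10")
    if satisfaction >= 8:
        return "zone of affection"     # highly satisfied: loyal 'apostles'
    if satisfaction >= 5:
        return "zone of indifference"  # merely satisfied: easily poached
    return "zone of defection"         # dissatisfied: potential 'saboteurs'

print(loyalty_zone(9))  # zone of affection
print(loyalty_zone(6))  # zone of indifference
print(loyalty_zone(2))  # zone of defection
```

Note that a move from 6 to 9 crosses a zone boundary and so changes expected behaviour far more than the three-point gap suggests, which is the non-linearity the text describes.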
KEY POINT Satisfaction does not affect organisational performance in a linear manner. Highly satisfied customers are much more valuable than 'merely satisfied' ones.

In our experience it would be unfair to suggest that many companies or managers still don't believe in the importance of customer satisfaction. But it would be reasonable to conclude that most organisations still don't fully understand the extent to which they need to invest in achieving very high levels of customer
satisfaction rather than settling for the ‘zone of mere satisfaction’. The same point applies to customer loyalty. Very loyal customers are far more profitable than quite loyal ones.
3.3 Companies don't act

As we said earlier, Adam Smith knew that there was no long term future for sellers that maximise profits at the expense of their customers' gratification. So why is it that some companies still don't seem to realise that? It's that word 'competition' – the double edged sword of customer satisfaction. Too little of it, and suppliers don't need to satisfy their imprisoned customers, as witnessed by the disingenuous indifference to the customer experience of many public sector organisations and private sector monopolies. Too much of it, and companies' continuing inability to master the utility equation's cost-benefit trade-off results in too many short term decisions to reduce costs at the expense of customer satisfaction. Of course, in the long run customers are almost always the beneficiaries of very competitive markets.

KEY POINT Many companies don't understand that cost reduction is a false economy if it reduces customer satisfaction.

In our many years of helping organisations to measure and improve customer satisfaction, the authors have noticed that the biggest single difference between companies at the top of The Leadership Factor's Satisfaction Benchmark League Table2 (with the highest levels of customer satisfaction) and those at the bottom is the latter's failure to take appropriate action to address the issues that would make the biggest difference to improving customer satisfaction. Often they take virtually no action, and when they do, it is often focused on the wrong things; typically things that are cheap or easy to address rather than confronting the real issues that are upsetting customers. By contrast, companies at the top of the League take focused action to address the areas where they are least meeting customers' requirements – whatever they are. They don't take easy options; they do whatever it takes to "do best what matters most to customers" because they understand that if they do that, rational customers will stay and spend more with them in the future. Chapters 12 to 15 of this book explain how to identify the precise areas where making improvements would lead to the greatest gain in customer satisfaction.
3.4 The science of CSM

3.4.1 Why measure satisfaction?

Phrases such as 'you can't manage what you don't measure' reflect the widely held view that without measures organisations lack the focus to make improvements even
in areas regarded as very important. For example, introducing measures of quality, such as statistical process control, was crucial to manufacturers in western economies improving their quality levels during the 1980s and 1990s. Some go further and suggest that organisations are defined by what they measure. They maintain that "what a business measures shapes employee thinking, communicates company values and channels organisational learning."13 Conversely, employees don't take seriously things that are not measured, largely because it's impossible to base performance management and rewards on them – a fact that was discovered by Enterprise Rent-A-Car in the 1990s14.

Founded in 1957 with seven hire cars, Enterprise Rent-A-Car had grown to 50,000 vehicles 30 years later. There was, however, growing anecdotal evidence of customer dissatisfaction, which was contrary to the very customer focused ethos that the company's founder had built from its inception. To counter this problem Enterprise began to measure customer satisfaction in 1989, but by 1994 satisfaction levels had shown no improvement. There were two reasons for this. First, the measures were not credible, mainly because sample sizes were relatively small, so the results only gave a national and regional overview and did not reach down to local operating units. Branch managers could assume that the problem of customer dissatisfaction was caused by other branches and not their own. Secondly, it didn't matter anyway, because branch managers' reward, recognition and promotional opportunities were based on sales growth and profitability, and no link was established between these important business measures and customer satisfaction. To improve the situation Enterprise Rent-A-Car addressed both of these problems.
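Why sample size mattered so much to credibility can be seen with standard survey arithmetic. The sketch below uses the usual normal-approximation margin of error for a proportion; the 70% satisfaction figure and the branch count are invented for illustration, not Enterprise's own numbers.

```python
import math

def margin_of_error(p, n, z=1.96):
    """95% margin of error for an observed proportion p from n interviews."""
    return z * math.sqrt(p * (1 - p) / n)

# With 100 interviews per branch per quarter and, say, 70% of customers
# 'completely satisfied', a branch-level score carries roughly a
# +/-9 percentage point margin of error...
branch = margin_of_error(0.70, 100)
# ...whereas a pooled national sample is far tighter (branch count invented).
national = margin_of_error(0.70, 100 * 4 * 5000)
print(round(branch, 3), round(national, 4))
```

This is the trade-off behind Enterprise's decision: small national samples are statistically precise overall but say nothing credible about an individual branch, so the sample had to be scaled up branch by branch.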
First, they made the customer satisfaction measures credible and managers accountable by massively increasing the sample size to 100 randomly sampled customers per branch per quarter and by changing from self-completion questionnaires to telephone interviews. This meant over 2 million interviews per annum, conducted by an external agency. Secondly, they made sure the results were taken seriously by demonstrating the link between customer satisfaction, loyalty and profit and by making customer satisfaction a fundamental part of branch and regional managers' performance appraisal. The result was a steady improvement in customer satisfaction over the following decade and the rise of the company to a clear market leadership position.

KEY POINT The right measures are essential to effectively manage employees' behaviour and organisational performance.

3.4.2 The accuracy of customer satisfaction measures

If organisations make decisions and take action on flawed information, it would not be surprising if their efforts resulted in little gain. This is a major reason for many
organisations' failure to improve customer satisfaction and is far more widespread than people realise. Many customer satisfaction measures are virtually useless, which is highly regrettable since the fundamental methodology for measuring customer satisfaction has been established for over two decades. Some people may still question the extent to which intangible feelings can be accurately measured, but this is a very outmoded view. Whilst the feelings may be subjective, modern research methods can produce objective measures of them – measures that can be accurately expressed in numbers and reliably tracked over time. Their level of reliability can be accurately stated and they can be used to develop powerful statistical models to help us understand both the causes and consequences of customer satisfaction, as we explain in Chapter 14.

The origins of the science of customer satisfaction measurement can be traced back to the mid-1980s and the work of Parasuraman, Zeithaml and Berry. Their SERVQUAL approach15,16,17, developed and refined in the second half of the decade, established a number of key satisfaction measurement principles such as:

- Measuring subjective perceptions as the basis of user-defined quality.
- Using exploratory research to identify the criteria used by customers to make service quality judgements prior to a main survey to gather statistically reliable data.
- The multi-dimensionality of customers' judgements.
- The relative importance of the dimensions and the fact that the most important will have the greatest effect on customers' overall feelings about an organisation.
- The use of a weighted index to reliably represent customers' overall judgements.
- The use of gap analysis to identify areas for improvement.
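The weighted-index and gap-analysis principles can be sketched in a few lines of Python. This is a simplified illustration of the general idea, not SERVQUAL's published instrument: the importance weights, expectation scores and perception scores below are all invented for the example.

```python
# Sketch of a SERVQUAL-style weighted index and gap analysis.
# Weights, expectation and perception scores (1-10) are invented examples.

dimensions = {
    # name: (importance weight, expectation, perceived performance)
    "reliability":    (0.30, 9.2, 7.8),
    "responsiveness": (0.25, 8.8, 6.9),
    "assurance":      (0.20, 8.5, 8.1),
    "empathy":        (0.15, 8.3, 8.4),
    "tangibles":      (0.10, 7.5, 8.8),
}

def weighted_index(dims):
    """Overall satisfaction index: importance-weighted perception scores."""
    total = sum(w for w, _, _ in dims.values())
    return sum(w * perc for w, _, perc in dims.values()) / total

def gap_analysis(dims):
    """Per-dimension gap (expectation minus perception), biggest first:
    the areas where customers' requirements are least well met."""
    return sorted(((name, round(exp - perc, 2))
                   for name, (_, exp, perc) in dims.items()),
                  key=lambda item: item[1], reverse=True)

print(round(weighted_index(dimensions), 2))
print(gap_analysis(dimensions)[0])   # responsiveness has the largest gap
```

Weighting by importance means the most important dimensions move the overall index most, and sorting the gaps yields the improvement priority list the text describes.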
However, aspects of the SERVQUAL model have been heavily criticised in more recent times, especially its authors’ assertion that customers’ judgement of any organisation’s service quality could be reliably measured across five standard dimensions – reliability, assurance, tangibles, empathy and responsiveness (sometimes labelled the RATER scale). There has also been much debate concerning the value of service quality measures compared with the much broader measure of customer satisfaction. We will address both issues in the next two sections.
3.4.3 Standard versus customer defined requirements
Many academic researchers have tested the five SERVQUAL dimensions in their own surveys and have drawn different conclusions concerning both the number and nature of the dimensions. Some have concluded that there should be fewer dimensions18,19,20 whilst others have advocated more than five21,22,23. Sector-specific dimensions have been proposed24,25 and even the SERVQUAL originators, in a later study, found the dimensions changing26. In their book “Improving Customer Satisfaction, Loyalty and Profit”27, Michael
Johnson and Anders Gustafsson of the University of Michigan Business School took these findings one step further when they introduced the concept of ‘the lens of the customer’, which they contrasted with ‘the lens of the organization’. Suppliers and their customers often do not see things in the same way. Suppliers typically think in terms of the products/services they supply, the people they employ to provide them and the processes that employees use to deliver the product or service. Customers look at things from their own perspective, basing their evaluation of suppliers on whether they have received the results, outcomes or benefits that they were seeking.
FIGURE 3.6 The lens of the customer (the lens of the organization: people, products, processes; the lens of the customer: results, outcomes, benefits)
Since customers’ satisfaction judgements are based on the extent to which their requirements have been met, a measure of satisfaction will be generated only by a survey based on the same criteria used by the customers to make their satisfaction judgements. This means that to ask the right questions, customers’ requirements have to be identified before the survey is undertaken and the questionnaire based on ‘the lens of the customer’. KEY POINT An accurate measure of how satisfied or dissatisfied customers feel can be generated only if the survey is based on the lens of the customer. Requirements are identified by qualitative research, a process in which focus groups or depth interviews are used to allow customers to talk about their relationship with a supplier and define what the most important aspects of that relationship are. This process is explained in Chapter 5.
3.4.4 Satisfaction or service quality
If any measure of customers’ attitudes is to be a reliable lead indicator of their future behaviour, it is fundamental to its accuracy that the survey instrument is based on the correct requirements. As already explained, much of the early debate around the SERVQUAL methodology focused on the extent to which the five ‘RATER’ dimensions were the correct ones, with several academic studies suggesting alternative or more appropriate ones. There have been studies that have demonstrated the organisational value of improving service quality in terms of increasing market share28, margins29, recommendation26 and profitability30,31,32. However, most commentators prefer the much broader concept of customer satisfaction to the more restrictive measures of service quality or the prescriptive SERVQUAL framework33,34,35,36,37,38. Clearly, customers normally judge organisations on a wider range of factors than service quality alone – product quality and price being two obvious examples. Before moving on it is necessary to cover two more reasons why organisations’ measures of customer satisfaction may not be providing suitable information for management decision making. The first is caused by insufficient knowledge of research; the second, paradoxically, can result from too much.
3.4.5 Unscientific surveys
Some organisations give responsibility for customer satisfaction measurement to people who do a relevant job (e.g. Customer Service Manager or Quality Manager) but who have no experience or expertise in research techniques. Research is a scientific process; it is not enough to approach the task with good intentions and common sense. Without sufficient training such people will make basic errors that render the output totally unsuitable for monitoring the organisation’s success in satisfying its customers.
Common problems include asking the wrong questions based on the lens of the organisation (see Chapter 4), introducing bias (Chapters 6 to 9 explain three common sources of bias) and attempting to monitor a measure whose margin of error is far greater than the amount that customer satisfaction could be expected to rise or fall over a twelve-month period. The issue of statistical reliability is explained in Chapter 6 on sampling and in Chapter 11 on calculating an accurate customer satisfaction index. As well as being a complete waste of resources, amateurish customer satisfaction surveys are a key reason why many organisations fail to attain the benefits of improving customer satisfaction.
3.4.6 Misguided professional research
There are many people in agencies and in research departments in companies who are very experienced market researchers. They would not make the basic errors outlined above because they are well versed in sample size and confidence intervals,
they understand the biasing effect of low response rates and unbalanced scales or questions, and they know that a composite index is more reliable than a single question. However, whilst a valid customer satisfaction survey will always be founded on sound research principles, an effective one will also be based on a deep understanding of customer satisfaction. Improving customer satisfaction is very difficult. Maintaining a sustained improvement in customer satisfaction over a few years is exceptionally challenging and will not happen unless the organisation has a customer satisfaction measurement methodology that is totally suited to the task – and many perfectly valid research techniques are not suited to providing data for monitoring and improving customer satisfaction. Rating scales illustrate the point. The market research industry engages in perennial debates about the advantages and disadvantages of different rating scales, such as verbal versus numerical or 5-point versus 10-point scales. A 5-point verbal rating scale is a totally valid research technique that is completely suitable for many forms of market research. For monitoring and improving customer satisfaction, however, it is vastly inferior to a 10-point numerical scale. The reasons (explained in Chapter 8) are customer satisfaction reasons rather than market research reasons. Generating reliable customer satisfaction measures that lead most effectively to customer satisfaction improvement requires extensive customer satisfaction knowledge as well as adequate research expertise, and few people have both.
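The sampling-reliability point can be made concrete with a short calculation. The following Python sketch (the book itself contains no code; the sample size, standard deviation and 95% z-value below are illustrative assumptions, not figures from any real survey) shows the margin of error around a mean satisfaction score under the standard normal approximation:

```python
import math

# Hypothetical illustration: margin of error for a mean satisfaction
# score measured on a 10-point scale. All input figures are invented.
def margin_of_error(std_dev: float, n: int, z: float = 1.96) -> float:
    """95% margin of error for a sample mean (normal approximation)."""
    return z * std_dev / math.sqrt(n)

# With 100 responses and a spread of 1.5 scale points, the mean is
# only reliable to within roughly +/- 0.29 points.
print(round(margin_of_error(std_dev=1.5, n=100), 2))  # -> 0.29
```

On these assumed figures, 100 responses leave the mean uncertain by about ±0.3 scale points – comfortably larger than the year-on-year movement many organisations could realistically expect, which is exactly the trap described above.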
Conclusions
1. Customer satisfaction is a relative concept based on the extent to which an organisation has met its customers’ requirements.
2. Customer satisfaction is an attitude based on customers’ subjective perceptions of an organisation’s performance.
3. Loyalty is a behaviour that is driven primarily by customers’ satisfaction attitudes.
4. Many managers don’t understand the extent to which they have to make customers very satisfied, rather than ‘merely satisfied’, to achieve the full organisational benefits of customer satisfaction.
5. By over-emphasising the importance of cost control, many companies make decisions that adversely affect customer satisfaction and, in the long run, customer loyalty and business performance.
6. Measures are essential to effectively manage employees’ behaviours and organisational performance.
7. Many organisations fail to improve customer satisfaction because their measures are based on flawed methodologies. The main essentials of an accurate CSM process are:
- Using the lens of the customer as the basis for CSM surveys.
- Measuring the relative importance of customers’ requirements.
- Basing the headline measure that is monitored over time on a composite index that is weighted according to the relative importance of its components.
- Identifying the areas where the organisation is failing to meet its customers’ requirements as the basis for actions to improve customer satisfaction.
Customer satisfaction measurement needs to be conducted by specialists – not by amateurs who will make basic research errors, nor by market researchers who don’t understand enough about the specific demands of a reliable CSM process.
References
1. The American Customer Satisfaction Index, www.theacsi.org
2. The Leadership Factor’s customer satisfaction benchmarking database: www.leadershipfactor.com/surveys/
3. Kotler, Philip (1986) “Marketing Management: Analysis, Planning and Control”, Prentice-Hall International, Englewood Cliffs, New Jersey
4. Swan and Combs (1976) “Product Performance and Customer Satisfaction: A New Concept”, Journal of Marketing, (April)
5. Oliver, Richard L (1997) “Satisfaction: A behavioural perspective on the consumer”, McGraw-Hill, New York
6. Schneider and White (2004) “Service Quality: Research Perspectives”, Sage Publications, Thousand Oaks, California
7. Helsdingen and de Vries (1999) “Services marketing and management: An international perspective”, John Wiley and Sons, Chichester
8. Peters and Austin (1986) “A Passion for Excellence”, William Collins, Glasgow
9. Heskett, Sasser and Schlesinger (1997) “The Service-Profit Chain”, Free Press, New York
10. Rust, Zahorik and Keiningham (1994) “Return on Quality: Measuring the financial impact of your company’s quest for quality”, McGraw-Hill, New York
11. Jones and Sasser (1995) “Why Satisfied Customers Defect”, Harvard Business Review 73, (November-December)
12. Keiningham and Vavra (2001) “The Customer Delight Principle: Exceeding customers’ expectations for bottom-line success”, McGraw-Hill, New York
13. Reichheld, Markey and Hopton (2000) “The Loyalty Effect – the relationship between loyalty and profits”, European Business Journal 12(3)
14. Taylor, Andy (2003) “Top box: Rediscovering customer satisfaction”, Business Horizons, (September-October)
15. Parasuraman, Berry and Zeithaml (1985) “A conceptual model of service quality and its implications for future research”, Journal of Marketing 49(4)
16. Parasuraman, Berry and Zeithaml (1988) “SERVQUAL: a multiple-item scale for measuring perceptions of service quality”, Journal of Retailing 64(1)
17. Zeithaml, Berry and Parasuraman (1990) “Delivering Quality Service”, Free Press, New York
18. Babakus and Boller (1992) “An empirical assessment of the SERVQUAL scale”, Journal of Business Research 24
19. Cronin and Taylor (1992) “Measuring service quality: An examination and extension”, Journal of Marketing 56
20. White and Schneider (2000) “Climbing the Commitment Ladder: The role of expectations disconfirmation on customers’ behavioral intentions”, Journal of Service Research 2(3)
21. Gronroos, C (1990) “Service management and marketing: Managing the moments of truth in service competition”, Lexington Books
22. Carman, J M (1990) “Consumer perceptions of service quality: An assessment of the SERVQUAL dimensions”, Journal of Retailing 66(1)
23. Gummesson, E (1992) “Quality dimensions: What to measure in service organizations”, in Swartz, Bowen and Brown (Eds) “Advances in services marketing and management”, JAI Press, Greenwich CT
24. Stevens, Knutson and Patton (1995) “DINESERV: A tool for measuring service quality in restaurants”, Cornell Hotel and Restaurant Administration Quarterly 36(2), pages 56-60
25. Dabholkar, Thorpe and Rentz (1996) “A measure of service quality for retail stores: Scale development and validation”, Journal of the Academy of Marketing Science 24(1)
26. Parasuraman, Berry and Zeithaml (1991) “Refinement and reassessment of the SERVQUAL scale”, Journal of Retailing 79
27. Johnson and Gustafsson (2000) “Improving Customer Satisfaction, Loyalty and Profit: An Integrated Measurement and Management System”, John Wiley and Sons, San Francisco, California
28. Buzzell and Gale (1987) “The PIMS Principles: Linking Strategy to Performance”, Free Press, New York
29. Gummesson, E (1993) “Quality Management in Service Organisations”, ISQA, Stockholm
30. Narver and Slater (1990) “The effect of market orientation on business profitability”, Journal of Marketing 54
31. Schneider, B (1991) “Service quality and profits: Can you have your cake and eat it too?”, Human Resources Planning 14(2)
32. Deshpande, Farley and Webster (1993) “Corporate culture, customer orientation and innovativeness in Japanese firms: A quadrad analysis”, Journal of Marketing 57
33. Gilmore and Carson (1992) “Research in service quality: Have the horizons become too narrow?”, Marketing Intelligence and Planning 10(7)
34. Lam, S S K (1995) “Assessing the validity of SERVQUAL: an empirical analysis in Hong Kong”, Asia Pacific Journal of Quality Management 4(4)
35. Buttle, F (1996) “SERVQUAL: review, critique, research agenda”, Journal of Marketing 60
36. Genestre and Herbig (1996) “Service expectations and perceptions revisited: adding product quality to SERVQUAL”, Journal of Marketing Theory and Practice 4(4)
37. Robinson, S (1999) “Measuring service quality: Current thinking and future requirements”, Marketing Intelligence and Planning 17(1)
38. Newman, K (2001) “Interrogating SERVQUAL: A critical assessment of service quality measurement in a high street retail bank”, International Journal of Bank Marketing 19(3)
39. Myers, James H (1999) “Measuring Customer Satisfaction: Hot buttons and other measurement issues”, American Marketing Association, Chicago, Illinois
CHAPTER FOUR
Asking the right questions
To say that you need to ask the right questions when undertaking a customer satisfaction survey may appear to be a statement of the obvious. Unfortunately, when it comes to customer satisfaction measurement, failing to do so is the single biggest mistake organisations make. Many simply don’t ask the right questions, even though they often devote considerable time and effort to deciding what the questions should be. That’s because they approach the task from the inside out, looking at it through the ‘lens of the organisation’, rather than, as they should, from the outside in, seeing it through the ‘lens of the customer’. Since customers’ requirements and their relative importance form such a fundamental part of an effective CSM process, this chapter is devoted to a full examination of the subject.
At a glance
In this chapter we will:
a) Illustrate why customer satisfaction surveys have to be based on the lens of the customer.
b) Explain why it is so crucial to develop an accurate understanding of the relative importance of customers’ requirements.
c) Examine the differences between stated and derived measures of importance.
d) Compare different techniques for producing statistically derived measures of importance.
e) Draw conclusions on the best way to understand customers’ requirements and their relative importance.
4.1 The lens of the customer
Many organisations assume that designing a questionnaire for a customer survey is easy. They might arrange a meeting attended by a few managers who, between them, suggest a list of appropriate topics for the questionnaire. There are two problems with this approach. Firstly, the questionnaire almost always ends up far too long because managers keep thinking of more topics on which customer feedback
would be useful or interesting. The second, and more serious, problem is that the questionnaire invariably covers issues of importance to the company’s managers rather than those of importance to customers. This is fine if the objective is simply to understand customers’ perceptions of how the organisation is performing in the specified areas, but it will not provide a measure of customer satisfaction. This fundamental misunderstanding of how to arrive at the right questions is perfectly illustrated by the following example.
Forget the free coffee. Just make the trains arrive on time.
In 1999 Which? magazine published an article entitled “Off the rails”1. It illustrated how CSM surveys can be hijacked by organisations (or managers within them). According to Which?, the train operating companies’ “surveys are close to useless” because the questions avoid customers’ main requirements. If an organisation really wants to know how satisfied its customers feel, the questions asked in the survey have to cover the same criteria that customers use to judge the organisation. Companies are tempted to include questions on areas where they’ve invested heavily or made improvements, but if these are of marginal importance to customers they will make little impact on how satisfied customers really feel. Which? conducted a survey to identify rail passengers’ main requirements and the top ten are shown in Figure 4.1. Not a single train operator included all of the customers’ top ten priorities on its questionnaire, questions about the punctuality of the trains being particularly conspicuous by their absence. The worst culprit at the time was GNER, whose survey covered only one item from the top ten criteria on which customers were judging it. They did ask about the on-train catering and about staff appearance; both came close to the bottom of customers’ priorities in the Which? survey.
FIGURE 4.1 Passengers’ main requirements
- Punctuality of trains
- Availability of seats
- Train frequency
- Information on delayed and cancelled trains
- Cleanliness of trains
- New rolling stock
- Safety and security on trains
- Cancellations
- Announcements on trains
- Journey time
Since customers’ satisfaction feelings are based on the extent to which their requirements have been met, a measure that truly reflects how satisfied or dissatisfied customers feel will be generated only by a survey based on the same criteria used by the customers to make their satisfaction judgements. This means that to ask the right questions, customers’ requirements have to be identified before the survey is undertaken and the questionnaire based on what’s important to customers rather than what’s important to the organisation. In Chapter 3 we introduced the concept of the ‘lens of the customer’, first articulated by Michael Johnson and Anders Gustafsson from the University of Michigan2. It is based on the fact that suppliers and their customers often do not see things in the same way. Suppliers typically think in terms of the products they supply, the people they employ to provide them and the processes that employees use to deliver the product or service. Customers look at things from their own perspective, basing their evaluation of suppliers on whether they have received the results, outcomes or benefits that they were seeking. KEY POINT To produce accurate measures of customer satisfaction, surveys have to be based on what’s important to customers.
4.2 Understanding what’s important to customers
Basing a customer satisfaction survey on the lens of the customer produces an accurate measure of how satisfied or dissatisfied customers feel because it employs the same criteria that the customers use to make that judgement. They are the customers’ most important requirements – the things that matter most to them. To achieve this we have to introduce an added layer of complexity, because understanding what is important to customers is not as simple as it may appear. In fact, market researchers have debated this topic more than almost any other aspect of CSM methodology, especially the relative merits of stated or direct measures of importance versus derived or indirect methods. To a large extent the debate has been justifiably fuelled by the fact that different methods of measuring importance have been shown to produce results that can differ very widely3,4,5. Yet, as Myers6 points out, getting this right “is arguably the single most important component of a customer satisfaction survey”, for three reasons:
1) It ensures that the survey does not include anything that is not important to customers and does not influence their satisfaction judgement.
2) It provides the basis for identifying PFIs (priorities for improvement) – areas where the organisation should focus its resources for maximum gain in customer satisfaction (see Chapter 12).
3) It enables the calculation of an accurate headline measure of customer satisfaction for tracking purposes – a composite customer satisfaction index
that is weighted according to what’s most important to customers. Since customers base their judgement of suppliers more heavily on the factors that are most important to them, a weighted index provides the only accurate means of monitoring the organisation’s success in improving customer satisfaction (see Chapter 11).
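The weighted composite index described above can be sketched in a few lines. This is an illustrative outline only, not the book’s own calculation: the requirement names and all scores below are invented, and real CSM indices may differ in detail (for example, in how the weights are derived).

```python
# Hypothetical sketch of a weighted satisfaction index on a 10-point scale.
def satisfaction_index(scores: dict, importance: dict) -> float:
    """Weight each requirement's mean satisfaction score by its relative
    importance, then express the result as a percentage of the scale maximum."""
    total_importance = sum(importance.values())
    weighted = sum(scores[req] * importance[req] / total_importance
                   for req in scores)
    return weighted / 10 * 100  # percentage of the maximum possible score

# Invented example data (mean scores out of 10 from a notional survey)
scores = {"Quality of food": 8.2, "Friendliness of staff": 7.9,
          "Price of food": 6.5}
importance = {"Quality of food": 8.84, "Friendliness of staff": 8.32,
              "Price of food": 6.82}

print(round(satisfaction_index(scores, importance), 1))  # -> 76.1
```

Note how the more important requirements pull the index towards their own scores, which is precisely why a weighted index tracks customers’ overall judgement more faithfully than a simple average.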
4.3 Stated importance
The simplest way to understand what’s important to customers is to ask them. One could simply ask them to say what’s important to them, starting with a blank sheet of paper and no prompts provided. Called ‘elicitation’3,5, this is the ideal starting point for understanding the lens of the customer: it is a straightforward thing for customers to do and it is easy to analyse by simply counting the number of times each requirement is mentioned. However, in a totally unprompted process customers will articulate only a small number of requirements, which means that an extremely large number of interviews have to be conducted for elicitation to stand any chance of uncovering the full extent of customers’ requirements. Even then, elicitation does no more than produce a list of requirements; it does not provide a measure of relative importance. Therefore, it would be more usual to provide prompts or other stimulus material to build a much more comprehensive list of customer requirements and then to understand their relative importance by asking customers to score the importance of each item on the list, preferably on a 10-point scale where 10 out of 10 means ‘extremely important’ and 1 out of 10 means ‘of no importance at all’.
KEY POINT Stated importance provides a measure of the relative importance of customers’ requirements that will be easily understood by all managers and employees.
Called stated (or direct) importance, the average scores generated by this exercise will provide a very clear and reliable view of the relative importance of customers’ priorities, as seen by the customers themselves. It is a very clear and simple process for customers to follow when they are interviewed and for colleagues to understand when the results are presented.
However, stated importance has been criticised on two counts.
4.3.1 High stated importance scores
Firstly, customers have a tendency to give fairly high importance scores, although this simply reflects reality – many things are very important to customers. Even though the range of average importance scores will be at the upper end of the scale, there will be a range. Some commentators maintain that stated importance scores always fall in a very narrow range (average scores above 8 out of 10). If this happens, it suggests that the questions have not been properly administered. If the correct procedures are followed (see Chapter 5.1), a very wide range of average importance scores will be
Chapter four
5/7/07
09:52
Page 47
Asking the right questions
generated by qualitative exploratory research, often from a high of almost 10 to a low of less than 3 on a 10-point scale, although less suitable scales, such as verbal scales or 5-point scales, will produce a much narrower range of scores. Using a 10-point scale, even the main survey, which includes only customers’ most important requirements, will typically produce stated importance scores from around 7 to almost 10. Since the purpose of this exercise is to understand the relative importance of customers’ priorities, the average scores will clearly highlight which of the requirements (all of which are important) customers see as the real top priorities. Moreover, if any requirements record importance scores below 6 out of 10 on the main survey or on a quantitative exploratory survey (see Chapter 5), that is conclusive evidence that they are not very important to customers and should not form part of any measure of customer satisfaction.
KEY POINT Stated importance is sometimes said to produce blanket high scores, providing little discriminatory power for understanding relative importance. However, if the correct scale is used, this criticism is greatly exaggerated.
4.3.2 Givens
The second criticism of stated importance is that customers tend to emphasise certain things when scoring the requirements – typically givens such as safety, price and cleanliness. Consider, for example, your own judgement as an airline passenger. If you were surveyed and asked to score out of 10 the importance of safety, you would almost certainly give it the top score. However, if you recall the basis on which you chose that airline, it’s very unlikely that safety was high on your list of selection criteria. It’s a ‘given’. Under normal circumstances, safety is not a factor that differentiates between airlines.
Therefore, in order to fully understand the criteria used by customers to select or evaluate suppliers, it is helpful to also use the second way of measuring what’s important to customers, known as impact.
4.4 Impact
4.4.1 Determinance
The concept of givens and differentiators goes back to the 1960s7,8,9 and is sometimes referred to as ‘determinance’. It is now well established that some things that are very important to customers won’t always make a big impact on how they judge an organisation, because they are givens. Sometimes misleadingly called ‘derived importance’, impact essentially highlights the things that are ‘top of mind’ for customers – the factors that ‘determine’, or make a big impact on, how they select and evaluate suppliers. Imagine that we approached, at random, a customer of an organisation and asked them for a quick view: “Overall, is XYZ a good company to do business with?” Unless
it operates in a very restricted market, any organisation will be exposed to this kind of ‘word of mouth’ all the time. In that situation, are its customers saying good or bad things about it, and what is making them say those things? The answer is that their testimonial, good or bad, will have been based on the aspects of dealing with the organisation that have made the biggest impact on them. The measure we are about to describe will highlight those factors.
4.4.2 Measuring impact
We are trying to identify the aspects of an organisation’s performance that are most closely associated with customers’ overall judgement of it. Conveniently, there is a statistical technique called correlation that does just this. To utilise this technique, a customer satisfaction questionnaire must contain a simple overall satisfaction question such as: “Taking everything into account, how satisfied or dissatisfied are you overall with ……… XYZ?” The overall satisfaction question must be scored on exactly the same scale used for all the other satisfaction questions – preferably a 10-point numerical scale. The data from the overall satisfaction question is then correlated against the customers’ satisfaction scores for each of the other requirements. This can be easily done in any statistical package; in Microsoft Excel, for example, the CORREL function performs it. The output of a correlation is a ‘correlation coefficient’, which is always a number between 0 and 1 (or 0 and -1 for a negative correlation). To utilise the impact measure, the only statistical knowledge required is a basic understanding of what the correlation coefficient means. This is illustrated by the examples in Figures 4.2 and 4.3.
FIGURE 4.2 Low correlation (scatter plot of 20 customers’ overall satisfaction against satisfaction with staff appearance, with outliers Customer X and Customer Y marked)
In the hypothetical example shown in Figure 4.2, 20 customers have scored their overall satisfaction with the supplier and their satisfaction with staff appearance, which seems to make little impact on customers’ overall satisfaction, achieving a very low correlation coefficient of 0.1. Close examination of the scatter plot clearly shows why. Customer Y does not like the supermarket (scoring 2 out of 10 for overall satisfaction) but has no problem with staff appearance (giving it a very high score of 9 out of 10). By contrast, Customer X rates the supermarket as a whole very highly, giving it an overall satisfaction score of 9 out of 10 despite having a very poor opinion of staff appearance, scoring it 2 out of 10. From that picture (and given an adequate sample size for statistical reliability), we can conclude, or ‘derive’, that staff appearance makes very little impact on customers’ overall judgement of the supplier. Statistically, the correlation coefficient of 0.1 tells us that the two variables have virtually no relationship with each other.
On the other hand, Figure 4.3 shows that staff helpfulness achieves an extremely high correlation coefficient of 0.9, which means that it has a very strong relationship with overall satisfaction. The scatter plot of the 20 imaginary customers that have taken part in the survey shows that each one gives a very similar score for their satisfaction with staff helpfulness and their overall satisfaction with the supplier. There are no customers who think the supplier is very good even though the staff are unhelpful, or vice versa. We can therefore conclude that staff helpfulness makes a very high impact on customers’ overall judgement of that supplier.
FIGURE 4.3 High correlation (scatter plot of 20 customers’ overall satisfaction against satisfaction with staff helpfulness)
KEY POINT High impact scores reflect factors at the top of customers’ minds when they think of an organisation.
It is very unusual to produce such a wide range of correlation coefficients in a real customer satisfaction survey. A more typical range is shown in Figure 4.4, which compares the importance and impact scores generated by a survey of restaurant customers. It demonstrates how some requirements, e.g. ambience, décor and price in this example, were not scored particularly highly for importance by customers but were making a bigger impact on their overall judgement of the restaurant. Conversely, there can be requirements that are scored highly for stated importance, cleanliness of the toilets being a good example in this case, that actually make little difference to customers’ overall judgement of the supplier – a classic given. In Chapter 5 we will explain how to use stated importance and impact in an exploratory survey to make absolutely certain that the main survey asks the right questions.

FIGURE 4.4 Importance and impact scores

Customer requirement             Importance   Impact
Cleanliness of the tableware        9.16       0.49
Cleanliness of the toilets          8.91       0.30
Cleanliness of the restaurant       8.88       0.48
Quality of food                     8.84       0.63
Professionalism of staff            8.40       0.59
Friendliness of staff               8.32       0.54
Welcome on arrival                  8.16       0.40
Ambience                            8.13       0.53
Air quality                         8.02       0.40
Availability of food                7.40       0.37
Seating                             7.29       0.37
Choice of food                      7.27       0.42
Décor                               6.86       0.45
Layout of the restaurant            6.84       0.38
Price of food                       6.82       0.48
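The comparison of stated importance and impact can be automated. The Python sketch below uses the scores from Figure 4.4, but the median-based cut-offs and the category labels are our own illustrative choices, not a method prescribed by the book.

```python
from statistics import median

# Importance and impact scores from Figure 4.4 (restaurant survey).
scores = {
    "Cleanliness of the tableware": (9.16, 0.49),
    "Cleanliness of the toilets": (8.91, 0.30),
    "Cleanliness of the restaurant": (8.88, 0.48),
    "Quality of food": (8.84, 0.63),
    "Professionalism of staff": (8.40, 0.59),
    "Friendliness of staff": (8.32, 0.54),
    "Welcome on arrival": (8.16, 0.40),
    "Ambience": (8.13, 0.53),
    "Air quality": (8.02, 0.40),
    "Availability of food": (7.40, 0.37),
    "Seating": (7.29, 0.37),
    "Choice of food": (7.27, 0.42),
    "Décor": (6.86, 0.45),
    "Layout of the restaurant": (6.84, 0.38),
    "Price of food": (6.82, 0.48),
}

# Arbitrary illustrative cut-offs: the medians of each column.
imp_cut = median(v[0] for v in scores.values())
impact_cut = median(v[1] for v in scores.values())

def classify(requirement):
    importance, impact = scores[requirement]
    high_imp = importance > imp_cut      # strictly above the median
    high_impact = impact >= impact_cut   # at or above the median
    if high_imp and not high_impact:
        return "given"          # stated as important, little current impact
    if high_impact and not high_imp:
        return "hidden driver"  # modest stated importance, big impact
    return "high priority" if high_imp else "low priority"

print(classify("Cleanliness of the toilets"))  # given
```

With these cut-offs, cleanliness of the toilets comes out as a given, while ambience, décor and price emerge as making more impact than their stated importance suggests, matching the pattern described in the text.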
4.5 Bivariate and multivariate techniques

4.5.1 Correlation
Whilst the use of statistical techniques to derive what is important to customers (‘indirect’ methods) is widely advocated in the CSM literature2,6,10,11,12, there is less agreement on the best technique to use. In Section 4.4 we explained the use of correlation to derive importance, and the impact data shown in Figure 4.4 were calculated using Pearson’s correlation coefficient. However, other statistical techniques are used by some CSM practitioners for calculating derived importance, most commonly multiple regression. A bivariate correlation, such as Pearson’s Product Moment Correlation, involves
correlating each requirement separately against overall satisfaction. This provides a very accurate measure of the extent to which each individual attribute co-varies with overall satisfaction, as illustrated in Figures 4.2 and 4.3. A high correlation coefficient, such as that recorded by ‘quality of the food’ in Figure 4.4, indicates that the satisfaction scores given for ‘quality of the food’ and the scores given for the overall satisfaction question contain a large amount of shared information. This is illustrated in Figure 4.5. The actual amount of shared information can be quantified by squaring the correlation coefficient, expressed as r². So in the case of ‘quality of the food’, the coefficients would be:

r = 0.63
r² = 0.40

In other words, 40% of the information in ‘quality of food’ is shared with the information in ‘overall satisfaction’.

FIGURE 4.5 Shared information [Venn diagram: two overlapping circles, (a) quality of the food and (c) overall satisfaction, with the intersection (b) representing their shared information]
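The r² arithmetic for ‘quality of the food’ can be checked directly:

```python
r = 0.63            # correlation of 'quality of the food' with overall satisfaction
r_squared = r ** 2  # proportion of shared information
print(round(r_squared, 2))  # 0.4, i.e. about 40% shared information
```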
4.5.2 Collinearity
A characteristic of CSM data is that surveys are based on a number of customer requirements, many of which are quite similar to each other. Consequently, there is shared information amongst the requirements as well as between each requirement and overall satisfaction. This phenomenon is known as collinearity and is demonstrated in Figure 4.6, the correlation matrix for the restaurant survey. This shows that requirements which are different aspects of the same topic, e.g. the three cleanliness attributes, correlate quite strongly with each other. So ‘cleanliness of the restaurant’ correlates highly with both ‘cleanliness of the tableware’ and ‘cleanliness of the toilets’, and there is quite a lot of correlation across many of the requirements.

KEY POINT Collinearity means that information is shared across attributes and is a characteristic of CSM data.

Since it only compares one attribute at a time with overall satisfaction, a bivariate
correlation completely ignores all this collinearity, as illustrated in Figure 4.7. As we know, ‘quality of food’ correlates with ‘overall satisfaction’ (ellipse A) as does ‘choice of food’ (ellipse B) and they also share information with each other (ellipse C). Since a bivariate correlation is oblivious of everything outside the information contained in the two variables it is comparing, it double counts the shaded area (D), including it in the coefficient for ‘choice of food’ as well as ‘quality of food’.
FIGURE 4.6 Correlation matrix

Variables: 1 Overall satisfaction, 2 Quality of food, 3 Price of food, 4 Availability of food, 5 Choice of food, 6 Cleanliness of the tableware, 7 Cleanliness of the toilets, 8 Cleanliness of the restaurant, 9 Air quality, 10 Professionalism of staff, 11 Welcome on arrival, 12 Friendliness of staff, 13 Ambience, 14 Layout of the restaurant, 15 Seating, 16 Décor

        1     2     3     4     5     6     7     8     9    10    11    12    13    14    15    16
 1      –  0.63  0.48  0.37  0.42  0.49  0.30  0.48  0.40  0.59  0.40  0.54  0.53  0.38  0.37  0.45
 2   0.63     –  0.56  0.34  0.32  0.43  0.18  0.34  0.21  0.43  0.22  0.35  0.29  0.26  0.21  0.26
 3   0.48  0.56     –  0.34  0.37  0.34  0.14  0.31  0.31  0.32  0.30  0.37  0.30  0.20  0.22  0.25
 4   0.37  0.34  0.34     –  0.47  0.43  0.27  0.41  0.33  0.36  0.25  0.32  0.22  0.30  0.30  0.32
 5   0.42  0.32  0.37  0.47     –  0.43  0.28  0.38  0.24  0.30  0.26  0.34  0.22  0.26  0.24  0.27
 6   0.49  0.43  0.34  0.43  0.43     –  0.46  0.62  0.43  0.47  0.32  0.41  0.32  0.35  0.38  0.33
 7   0.30  0.18  0.14  0.27  0.28  0.46     –  0.56  0.32  0.29  0.18  0.28  0.16  0.21  0.24  0.31
 8   0.48  0.34  0.31  0.41  0.38  0.62  0.56     –  0.48  0.41  0.33  0.40  0.35  0.39  0.39  0.47
 9   0.40  0.21  0.31  0.33  0.24  0.43  0.32  0.48     –  0.32  0.23  0.23  0.25  0.30  0.40  0.36
10   0.59  0.43  0.32  0.36  0.30  0.47  0.29  0.41  0.32     –  0.48  0.63  0.33  0.25  0.25  0.27
11   0.40  0.22  0.30  0.25  0.26  0.32  0.18  0.33  0.23  0.48     –  0.67  0.34  0.20  0.29  0.19
12   0.54  0.35  0.37  0.32  0.34  0.41  0.28  0.40  0.23  0.63  0.67     –  0.42  0.25  0.26  0.20
13   0.53  0.29  0.30  0.22  0.22  0.32  0.16  0.35  0.25  0.33  0.34  0.42     –  0.40  0.31  0.32
14   0.38  0.26  0.20  0.30  0.26  0.35  0.21  0.39  0.30  0.25  0.20  0.25  0.40     –  0.61  0.39
15   0.37  0.21  0.22  0.30  0.24  0.38  0.24  0.39  0.40  0.25  0.29  0.26  0.31  0.61     –  0.42
16   0.45  0.26  0.25  0.32  0.27  0.33  0.31  0.47  0.36  0.27  0.19  0.20  0.32  0.39  0.42     –
KEY POINT A bivariate correlation ignores the collinearity amongst the customer requirements.

Since the collinearity is multiplied across many requirements, as shown in Figure 4.6, there is a lot of double counting going on. For this reason some CSM commentators claim that correlation is an inappropriate technique for customer satisfaction data13, arguing that it is necessary to use a multivariate technique such as multiple regression.
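A correlation matrix like Figure 4.6 is straightforward to compute from raw survey responses. This Python sketch uses five invented respondents and four variables; none of the numbers comes from the book.

```python
from math import sqrt

def pearson_r(xs, ys):
    """Pearson's product moment correlation coefficient."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(xs, ys))
    return cov / (sqrt(sum((a - mx) ** 2 for a in xs)) *
                  sqrt(sum((b - my) ** 2 for b in ys)))

# Invented 1-10 scores from five respondents.
responses = {
    "overall satisfaction": [7, 8, 5, 9, 6],
    "quality of food":      [7, 9, 5, 8, 6],
    "choice of food":       [6, 8, 5, 9, 7],
    "decor":                [5, 4, 7, 6, 5],
}

# Correlate every variable with every other variable.
matrix = {a: {b: round(pearson_r(va, vb), 2) for b, vb in responses.items()}
          for a, va in responses.items()}
```

The matrix is symmetric with 1.0 on the diagonal; the off-diagonal cells between the attributes themselves are where the collinearity shows up.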
FIGURE 4.7 Correlation and collinearity [Venn diagram: overlapping ellipses for ‘quality of food’, ‘choice of food’ and ‘overall satisfaction’; A marks the quality–overall overlap, B the choice–overall overlap, C the quality–choice overlap and D the shaded region common to all three]
4.5.3 Multiple regression
Multiple regression simultaneously looks at the information that all the requirements share with overall satisfaction. This removes the collinearity problem by eliminating all double counting, but often produces a very different outcome, as shown in Figure 4.8.

FIGURE 4.8 Coefficients from correlation and multiple regression

Customer requirement             Correlation   Multiple regression
Quality of food                     0.63            0.35
Price of food                       0.48            0.07
Availability of food                0.37           -0.01
Choice of food                      0.42            0.07
Cleanliness of the tableware        0.49            0.02
Cleanliness of the toilets          0.30            0.01
Cleanliness of the restaurant       0.48            0.01
Air quality                         0.40            0.07
Professionalism of staff            0.59            0.20
Welcome on arrival                  0.40           -0.04
Friendliness of staff               0.54            0.17
Ambience                            0.53            0.16
Layout of the restaurant            0.38           -0.01
Seating                             0.37            0.02
Décor                               0.45            0.10
All the multiple regression scores (known as beta coefficients) are lower than their corresponding correlation coefficients simply because the double counting has been
eliminated. Closer inspection, however, reveals some major differences in the relative importance implied by the two columns of data. The beta coefficients suggest that none of the cleanliness requirements makes any difference to overall satisfaction, and the same applies to many other attributes such as ‘availability of food’, ‘welcome on arrival’, ‘layout of the restaurant’ and ‘seating’. Moreover, three requirements show negative beta coefficients, albeit only tiny ones, suggesting that, to take one of the examples, the less pleasant the welcome on arrival, the more the customers like it – a clearly nonsensical conclusion to draw. The reason why this happens is illustrated in Figure 4.9.

FIGURE 4.9 Multiple regression and collinearity [Venn diagram: overlapping ellipses for ‘quality of food’, ‘choice of food’ and ‘overall satisfaction’, with area B marking the information that quality and choice of food share with each other and with overall satisfaction]

Beta coefficients represent the unique contribution made by each requirement to overall satisfaction. As we saw in Figure 4.7, ‘quality of food’ and ‘choice of food’ co-vary, but since Pearson’s correlation relates them individually to overall satisfaction, it obviously double counts any shared information, which therefore appears in the coefficients for both requirements. Multiple regression does no double counting since it identifies the incremental contribution to overall satisfaction of each requirement when combined with all the remaining requirements. Unlike correlation, it cannot allocate area B in Figure 4.9 to both quality and choice of food, so reflects only the incremental contribution made by each requirement in the beta coefficients. As shown in Figure 4.8, the incremental contribution made by ‘quality of food’ to explaining overall satisfaction is far greater than that made by ‘choice of food’. Consequently, the relative impact of some requirements is over-emphasised whilst that of many others is heavily under-stated. In the real world, of course, customers at the restaurant do not consider the incremental amount of satisfaction they derive from each of the 15 requirements whilst holding all the others constant!

KEY POINT Multiple regression eliminates the collinearity, but in doing so under-states the impact of many of the requirements.
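The contrast between the two techniques can be reproduced on a toy data set. In this Python sketch (all scores invented, not from the book), two collinear attributes both correlate strongly with overall satisfaction, yet ordinary least squares, solved here via the normal equations, splits the credit unevenly between them.

```python
from math import sqrt

def centred(v):
    """Subtract the mean from every value."""
    m = sum(v) / len(v)
    return [x - m for x in v]

def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

# Invented 1-10 scores for six customers.
quality = [9, 8, 7, 5, 4, 3]   # quality of food
choice  = [8, 8, 6, 5, 5, 3]   # choice of food, co-varies with quality
overall = [9, 8, 6, 5, 4, 3]   # overall satisfaction

q, c, y = centred(quality), centred(choice), centred(overall)

# Bivariate correlations: both attributes look almost equally influential.
r_quality = dot(q, y) / sqrt(dot(q, q) * dot(y, y))
r_choice  = dot(c, y) / sqrt(dot(c, c) * dot(y, y))

# Two-predictor least squares via the normal equations: the shared
# information can no longer be counted twice, so the betas diverge.
det = dot(q, q) * dot(c, c) - dot(q, c) ** 2
beta_quality = (dot(c, c) * dot(q, y) - dot(q, c) * dot(c, y)) / det
beta_choice  = (dot(q, q) * dot(c, y) - dot(q, c) * dot(q, y)) / det

print(round(r_quality, 2), round(r_choice, 2))        # both high and similar
print(round(beta_quality, 2), round(beta_choice, 2))  # credit split unevenly
```

On this data both correlations exceed 0.95, while the beta for quality is well above the beta for choice, mirroring the pattern in Figure 4.8.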
Some market researchers have been very attracted to multiple regression because of its focus on a small number of so-called ‘key drivers’. However, the process it uses to eliminate collinearity often distorts the results – cleanliness and choice of food are important to people dining out and do influence their judgement of the restaurant. The measures provided by correlation therefore more accurately reflect the relative impact made by the attributes in customers’ minds. For this reason we do not recommend the use of multiple regression to derive measures of impact or importance in CSM, a view supported by other commentators such as Myers6 and Szwarc14. As Myers points out6: “Multiple regression coefficients can be distorted if collinearity among attributes is high (as it often is). The problem here is that multiple regression coefficients can be very misleading because one attribute will get a high coefficient while a very similar one will get a much lower coefficient.”

KEY POINT By allocating shared information to only one requirement, key driver analysis produced by multiple regression often produces misleading conclusions.

4.5.4 The complete picture
Gustafsson and Johnson found that stated importance, compared with statistically derived measures, correlated relatively more strongly with loyalty than with satisfaction, supporting the view that stated importance provides a more stable measure and vital information on the longer-term drivers of customers’ behaviours15. Correlation, on the other hand, offers a good reflection of the issues that are currently ‘top of mind’ with customers and should therefore be seen as a measure of impact rather than importance14. The two are simply measures of different things, so the best understanding of customers’ requirements and their relative importance is produced by using both.
In the next chapter we will explain how this will help to ensure that the main survey questionnaire asks the right questions and in Chapter 10 how it should be used in the analysis of the main survey.
4.6 Conclusions
1. If a CSM process is to provide an accurate measure of how satisfied or dissatisfied customers feel, it must be based on the ‘lens of the customer’, using the same criteria that the customers use to judge the organisation.
2. Stated importance is a clear and simple measure of what customers say is important. Whilst it is heavily criticised by some for emphasising givens, it is not only the most accurate measure of what is important to customers, it is the only one.
3. All statistically derived measures reflect impact rather than importance, and of these, correlation provides the best indication of the extent to which the different attributes are currently influencing customers’ judgement of the supplier.
4. Impact measures or “key drivers” are changeable, reflecting current problems rather than actual importance. Stated importance provides a more stable measure.
5. For a full understanding of ‘the lens of the customer’, organisations should therefore use both stated importance and impact for CSM.
References
1. Which? Magazine (1999) “Off the rails”, January, pages 8-11
2. Johnson and Gustafsson (2000) “Improving Customer Satisfaction, Loyalty and Profit: An Integrated Measurement and Management System”, John Wiley and Sons, San Francisco, California
3. Jaccard, Brinberg and Ackerman (1986) “Assessing Attribute Importance: A comparison of six methods”, Journal of Consumer Research 12 (March)
4. Heeler, Okechuku and Reid (1979) “Attribute Importance: Contrasting measurements”, Journal of Marketing Research 8 (August)
5. Griffin and Hauser (1993) “The Voice of the Customer”, Marketing Science 12 (Winter)
6. Myers, James H (1999) “Measuring Customer Satisfaction: Hot buttons and other measurement issues”, American Marketing Association, Chicago, Illinois
7. Foote, Nelson (1961) “Consumer Behavior: Household Decision Making Volume 4”, New York University Press, New York
8. Myers and Alpert (1968) “Determinant Buying Attitudes: Meaning and Measurement”, Journal of Marketing 32 (October)
9. Alpert, Mark (1971) “Identification of Determinant Attributes: A Comparison of Methods”, Journal of Marketing Research 8 (May)
10. Cronin and Taylor (1992) “Measuring Service Quality: A Reexamination and Extension”, Journal of Marketing 56
11. Parasuraman, Zeithaml and Berry (1988) “SERVQUAL: A multiple-item scale for measuring consumer perceptions of service quality”, Journal of Retailing 64(1)
12. Schneider and White (2004) “Service Quality: Research Perspectives”, Sage Publications, Thousand Oaks, California
13. Allen and Rao (2000) “Analysis of Customer Satisfaction Data”, ASQ Quality Press, Milwaukee
14. Gustafsson and Johnson (2004) “Determining Attribute Importance in a Service Satisfaction Model”, Journal of Service Research 7(2)
CHAPTER FIVE
Exploratory research

As we said in the last chapter, an accurate measure of customer satisfaction will be produced only if the survey is based on ‘the lens of the customer’. To do this, it is essential to talk to customers before the start of the survey to find out what’s important to them. This is called exploratory research. The questionnaire for the main survey will then be based on the things that are most important to customers.
At a glance
In this chapter we will:
a) Introduce the concept of qualitative research
b) Explain how to conduct depth interviews for CSM
c) Explain how to conduct focus groups for CSM
d) Outline the advantages of an exploratory survey
e) Consider how often to repeat exploratory research
5.1 Qualitative research
Beginning the exercise with exploratory research is widely advocated, and most commonly the exploratory research will be qualitative1,2,3. Qualitative research involves getting a lot of information from a small number of customers. Lots of information is needed because at this stage an in-depth understanding of what’s important to customers is essential for including the right questions on the main survey questionnaire. To achieve this it is important to get respondents talking in detail about their experiences, their attitudes and, especially in consumer markets, their emotions and feelings4,5. So, from qualitative research a large amount of in-depth information is gathered from each customer, producing a lot of understanding. However, since it involves only small sample sizes it is not quantitative, so it is not possible to draw statistical inferences from it. This is of no concern since the only purpose of exploratory research is to understand customers’ requirements sufficiently well to ask the right questions in the main survey. It is the main survey, which is undertaken with a larger sample and is quantitative, that establishes statistically reliable measures of importance as well as satisfaction. However, it can
also be useful to include a quantitative element at the exploratory stage before the questionnaire for the main survey or ongoing tracking is finalised1,6,7, and this option will also be examined later in the chapter. Initially, however, we will examine the two main qualitative exploratory research techniques: depth interviews and focus groups.
5.2 Depth interviews
Advocated by Johnson and Gustafsson8, depth interviews are usually face to face and one to one. The duration of a depth interview can range from 30 to 90 minutes depending on the complexity of the customer–supplier relationship. Depth interviews are more commonly used in business to business markets, where the customers are other organisations, so we will describe the depth interview process mainly in that context.

FIGURE 5.1 Sample size in exploratory research9 [chart: percentage of customers’ needs identified (y-axis, 0-100%) against the number of focus groups or face to face interviews conducted (x-axis, 0-30), with separate curves for face to face interviews and focus groups]
Due to the qualitative nature of exploratory research and to the relatively low variance in customers’ requirements, sample sizes do not need to be large. Around 12 depth interviews are typically adequate for CSM exploratory research in a B2B market. As illustrated in Figure 5.1, Griffin and Hauser9 identified that 12 depth interviews will identify at least 90% of customers’ requirements. A very small customer base might need fewer. A large and complex customer base would need more interviews to ensure a good mix of different types of customers, such as:
- High value and lower value customers
- Customers from different business sectors
- Customers from different channels such as manufacturers and distributors
- Different geographical locations
- A range of people from the DMU (decision making unit)
5.2.1 Who is the customer?
It has been established for many years that in all but the smallest organisations,
supplier selection decisions are not normally made by a single individual but by a ‘DMU’, sometimes called a ‘buying center’10,11,12. As far as customer satisfaction is concerned, there will be several individuals who are affected in some way by a supplier’s product or service, and they will communicate with each other, formally or informally, to determine the customer’s level of satisfaction as well as various loyalty decisions. The members of the DMU are typically identified in terms of the roles they play, since job titles vary enormously across organisations. The five traditional DMU roles are13:

Buyers – Typically in the purchasing department, buyers usually place the orders and often have the most contact with suppliers.
Users – The main recipients and users of the product or service, e.g. production managers in a manufacturing setting.
Deciders – The ultimate decision maker could be anybody from the CEO to another member of the DMU to a whole committee.
Influencers – Although ‘deciders’ are often seen as making the decision, they may be heavily influenced by other colleagues, especially those with technical knowledge such as engineers.
Gatekeepers – Control the flow of information, often shielding DMU members from suppliers.

Partly because of gatekeepers, suppliers’ knowledge of DMU members may be incomplete14,15, and this can be a problem for the depth interview and main survey sampling process. The fact is, there is no standard DMU composition: size can vary widely from 2 or 3 individuals to 15-20, though 5 to 6 is the most common16. The relative influence of DMU members is also highly variable17. The exploratory research, and later the main survey, must reach the full spectrum of individual roles in the DMU if it is going to be accurate. For qualitative exploratory research, it is valid to select the participants (organisations and individuals) using ‘judgmental sampling’18 (using good judgment to ensure that the small sample is representative of the business).
For quantitative surveys sampling will need to be much more sophisticated, and this is covered in Chapter 6. The fundamental purpose of exploratory research is therefore to generate a list of customers’ most important requirements to use as the basis for the main survey questionnaire. To achieve this, a depth interview should be conducted according to a carefully defined process, which is outlined in the next four sub-sections of this chapter.

5.2.2 Indirect questioning
A depth interview is not a conventional interview where the interviewer asks a sequence
of short questions each followed by short answers. To achieve the objectives listed above, it is essential to get the customer talking as much as possible, so it is most effective to think in terms of asking indirect questions, such as the following ‘decision process’ question:

“I’d like you to imagine that you didn’t have a supplier of (product/service) and you had to start with a blank piece of paper and find one. I wonder if you could talk me through what would happen in this organisation from the first suggestion that you might need a supplier of (product/service) right through to the time when a supplier has been appointed and evaluated. As you talk me through the process, perhaps you could also point out the different people in your company who might get involved at different times. What sort of things would each of those people be looking for from this supplier and what roles would they each play in the decision making process?”

A more sophisticated approach than the elicitation method described in Section 4.3, this is not a question that will be answered briefly. Indeed, in some organisations it will stimulate a complicated explanation that may continue for 15 or 20 minutes. While the respondent is talking through this process, the interviewer should jot down everything that seems to be important to anybody in the DMU – a list of customer requirements. Once the customer has outlined everything he can think of, it is perfectly valid to prompt with any factors that may be important to customers but have not been mentioned. The key thing is not to lead the respondent19.

KEY POINT Ask indirect questions to maximise the amount of information provided by customers.

An alternative indirect question that is simple but effective is to ask customers to describe their ideal supplier. This could involve the customer talking through an imaginary guided tour around this perfect supplier’s organisation.
Another approach is to ask customers to imagine that they are about to leave their job. They must therefore brief someone about everything they would need to know to manage the company’s supply of (the product or service in question), including meeting the requirements of all their internal customers. This is a very effective way of understanding the composition of the DMU plus the criteria that its members will be using to judge the supplier.

5.2.3 Customer requirements
After some prompting the depth interview will have generated a very comprehensive list of things that are important to the customer, and it might be a very long list; often sixty or more customer requirements in a B2B market, which is far too many for the main survey questionnaire. From this long list it is therefore necessary to identify the things that are most important to customers. One way is simply to ask them, by going down the list and asking the customer to rate each item for importance. The trouble
with this approach is that virtually everything ends up being important to customers. It is more productive to use a ‘forced trade-off’ approach to provide a much more accurate indication of the relative importance of a list of requirements. There are proprietary trade-off techniques such as Conjoint Analysis, but the problem with many of these is that they rely on making a large number of trade-offs, which gets more than a little tedious if 50 or 70 customer requirements have been suggested! For CSM exploratory research it is therefore necessary to use a technique such as the ‘top priority trade off’, which provides the required degree of accuracy but can be administered in the 20 to 30 minutes that will remain of a one hour depth interview.

5.2.4 Top priority trade off
Customers initially select their top priority – the one thing they would choose if they could select only one item from the entire list of requirements. The customer then gives their top priority a score out of 10 for its importance to them, where 10 is extremely important and 1 is of no importance at all. They will almost invariably score it 10 since it is their top priority. (A ten point numerical rating scale is only one of many scales used in market and social research. A full assessment of rating scales is provided in Chapter 8.) Having established a scale, customers can then be asked to score everything else for importance compared with their top priority. Using this trade-off approach will provide a more accurate reflection of the relative importance of each item and will generate a far wider range of scores than simply going down the list without establishing the ‘top priority’ benchmark. Once the depth interviews are completed it is a simple task to total the scores given for each requirement. Those achieving the highest scores are the most important customer requirements, and these are the items that should be included on the questionnaire for the main survey.
For a reasonable length questionnaire, around 15 to 20 customer requirements for the main survey would be normal, but this topic will be covered in more detail in Chapter 9.
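Totalling the trade off scores across interviews is then mechanical. A minimal Python sketch, with invented interview data and an arbitrary shortlist size of three:

```python
from collections import Counter

# Each dict holds one depth interview's importance scores (invented data).
interviews = [
    {"delivery reliability": 10, "price": 8, "technical support": 6},
    {"delivery reliability": 9, "price": 10, "product quality": 7},
    {"product quality": 10, "delivery reliability": 8, "technical support": 5},
]

# Total the scores given for each requirement across all interviews.
totals = Counter()
for scores in interviews:
    totals.update(scores)

# The highest-scoring requirements go on the main survey questionnaire.
shortlist = [req for req, total in totals.most_common(3)]
print(shortlist)
```

In practice the shortlist would be the 15 to 20 highest-scoring requirements rather than three.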
5.3 Focus groups
Some organisations prefer using focus groups for qualitative research since the group dynamics often stimulate more customer requirements than would be generated by one-to-one depth interviews. Focus groups are also very good for clarifying ‘lens of the customer’ thought processes and terminology, ensuring not only that the main survey questionnaire will be relevant to customers but also that it will minimise misinterpretation, since it will use the words used by customers to describe their requirements20. It is normal for CSM exploratory research to run four to eight focus groups with around eight21,22 customers in each, although more groups may be held for a complex customer base requiring segmentation. As shown earlier in Figure 5.1, this will normally identify at least 90% of customers’ requirements. Where there are segments
of customers who may hold very different views, it is helpful to segment the groups. For example, the views of younger people and older people towards health care or pensions are likely to differ considerably. In such cases it is not productive to mix them in the same focus group; it is better to run separate groups for younger and older customers. When running focus groups the following elements should be considered.
5.3.1 Recruitment
Focus group participants can be invited personally at the point of sale, through street interviews or by telephone. It is important to provide written confirmation of all the details, such as time, location and anything respondents need to bring with them, e.g. spectacles. As well as reminding people the day before, usually by telephone, it is also normal to offer them an incentive23 to provide an extra reason to turn out of the house on a cold winter night instead of settling down to watch TV. In the UK, incentives can vary from £20 to over £100 for very wealthy individuals; £50 is average. Higher rates are needed in London than in the provinces, and the more affluent the customer, the larger the incentive needs to be. Another critical factor is the strength of the relationship between the customer and the supplier. The weaker it is, the more difficult it can be to generate any interest in or commitment to the focus group on the part of the customers. We were once simultaneously recruiting and running two sets of focus groups for two clients in financial services. One was in telephone banking, and even with very high incentives recruitment was difficult and the attendance rate poor. The customers had no relationship with, and little commitment to, the supplier. The second client was a traditional building society. Many customers had held mortgages or savings accounts over a long period, personally visited their local branch and were loyal customers. Although the topics for discussion were virtually identical for both groups, the building society customers were far easier to recruit and the attendance rate was almost 100%.
It is a good idea to consider what kind of venue will make the participants feel most relaxed. It needs to be somewhere they are familiar with and somewhere they see as convenient. The local pub’s function room for example, will often work better for attendance rates than the smart hotel further away. In many major cities there are specialist ‘viewing facilities’ for hosting focus groups. Though expensive they enable easy videoing and the ability to view the proceedings live. Information on finding these venues is provided in Appendix 2. 5.3.3 Running focus groups Focus groups are run by a facilitator (sometimes called a moderator) who will explain the purpose of the group, ask the questions, introduce any activities and generally manage the group. The facilitator clearly needs to be an excellent communicator and
an extremely good listener. They must also be strong enough to keep order, insist that only one participant speaks at a time, prevent any verbose people from dominating the group and adhere to a time schedule. As well as needing a high level of expertise, the facilitator must also be objective, which is why it is always preferable to use a third party facilitator for focus groups, especially for CSM22. The group will often start with a few refreshments, giving an opportunity for participants to chat informally and break the ice. Once the discussion starts it is very important to involve everybody right at the beginning, so a focus group normally starts with a few easy questions that everyone in the group can answer21,23. Examples include simple behavioural questions such as the length of time they have been a customer, the products or services they buy and the frequency of purchase. Also suitable are simple customer experience questions, such as examples of great or poor service they have received in the sector concerned.

5.3.4 Identifying customers’ requirements
Once everyone has said something, a CSM focus group is effectively divided into two parts. The first part involves achieving the over-riding objective of CSM exploratory research – identifying the things that are important to customers when selecting or evaluating suppliers. This can be done by simple questioning and discussion, but it is more effective to use ‘projective techniques’24 to stimulate discussion, encourage some lateral thinking and, often, generate ideas that would not have resulted from normal question and answer sessions. There are many projective techniques used in qualitative market research, but the following are particularly appropriate to CSM exploratory research.

(a) Theme boards
One projective technique is grandly known as thematic apperception, which simply means using themes or images to uncover people’s perceptions25.
Examples of the technique in action include asking people to draw pictures or cut pictures out of magazines which symbolise or remind them of the relevant area of customer activity. Less time consuming is to use theme boards as the stimulus material. There would typically be one board showing images which are positive or congruent with the brand concerned and one showing negative or incongruent images relating to the product/service in question. Theme boards will stimulate considerable discussion about customer experiences that have made an impact on participants and, consequently, things that are important to them.

(b) Creative comparisons

A creative comparison is an analogy, comparing an organisation or product which may have few distinctive features with something else that has far more recognisable characteristics. A common example would be a question such as: “If ABC Ltd were an animal, what kind of animal would it be?” Answers may range from sharks to elephants, but having elicited all the animals from
the participants, the facilitator will ask why they thought of those particular examples. The reasons given will start to uncover customers’ perceptions of the company in question. In addition to animals, creative comparisons can be made with people, such as stars from the media or sporting worlds, all with the objective of highlighting things that are important to customers26.

(c) The Friendly Martian

The Friendly Martian is an excellent projective technique for getting respondents to talk through the decision process (the way they make judgements between one supplier and another) in order to get some clues about which things are important to them as customers. In CSM focus groups for a restaurant, for example, the Friendly Martian technique would be introduced as follows: “Let’s imagine that a Friendly Martian (an ET-type character) came down from outer space. He’s never been to the earth before, and you had to explain to this Friendly Martian how to arrange a meal out in a restaurant with some friends. What kind of things should he look out for, what does he need to know, what kind of things should he avoid? You’ve got to help this little guy to have a really good night out and make sure he doesn’t end up making any mistakes. What advice would you give him?” Since the little Martian doesn’t know anything, participants will go into much more detail and mention all kinds of things that they would have taken for granted if a direct question had been asked.

5.3.5 Prioritising customers’ requirements

Having used some of the techniques outlined above to identify a long list of things that are of some importance to customers, the remainder of the focus group will be much more structured, following broadly the same steps that we outlined for the depth interview. First, list on a flip chart all the customer requirements that have been mentioned in the discussions during the first half of the focus group and see if anybody can think of any more to add.
Next, ask all participants to nominate their top priority and score it out of ten to establish a clear benchmark in their minds. This should be done individually, so it is best to give out pencils and answer sheets enabling everybody to write down their own views. Having established everybody’s top priority, each participant can read down the list, again individually, and give every customer requirement a score out of ten to denote its relative importance. Having completed all the groups, you can add up the scores given by all the participants and, typically, the top 15-20 requirements will be used for the questionnaire for the main survey.
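The tallying step described above can be sketched in a few lines of code. This is purely illustrative: the requirement names and scores are invented, and `rank_requirements` is a hypothetical helper, not part of the book’s method.

```python
# Tally focus-group importance scores and keep the top requirements.
# Hypothetical data: each participant scores every requirement out of 10.
scores = {
    "Speed of service":      [9, 8, 10, 9],
    "Friendliness of staff": [7, 9, 8, 8],
    "Opening hours":         [6, 5, 7, 6],
    "Quality of advice":     [10, 9, 9, 10],
}

def rank_requirements(scores, top_n):
    """Sum each requirement's scores across participants and rank them."""
    totals = {req: sum(vals) for req, vals in scores.items()}
    ranked = sorted(totals.items(), key=lambda kv: kv[1], reverse=True)
    return ranked[:top_n]

# The book recommends carrying the top 15-20 forward; 2 suffices here.
for req, total in rank_requirements(scores, 2):
    print(f"{req}: {total}")
```

In practice the same summation is simply done on the flip chart or answer sheets; the code only makes the arithmetic explicit.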
5.4 Quantitative exploratory surveys

As we have said, the qualitative phase will identify many factors of importance to
customers – too many for a reasonable-length questionnaire for the main survey. To be absolutely certain that the questionnaire focuses on the right issues (i.e. the factors most responsible for making customers satisfied or dissatisfied), it can be very helpful to add a quantitative element to the exploratory research. Usually this would take the form of a telephone survey, though different methods of conducting quantitative surveys will be reviewed in detail in Chapter 7. To make the exploratory research quantitative, a statistically reliable sample size is necessary. This topic will be covered in detail in Chapter 6, but the minimum sample size for a CSM quantitative exploratory survey would be 200 interviews. As we said in Chapter 4, understanding what is important to customers is not as simple as it may appear, so a reliable sample size provides the opportunity to use correlation techniques as well as stated importance to produce a fully rounded view of how customers judge the organisation. Figure 5.2 shows the stated importance scores out of 10 and the impact coefficients for a bank. The first column lists the attribute number, showing the order in which they were listed on the exploratory survey questionnaire. If the questionnaire for the main survey is based on the 15 most ‘important’ requirements, it would not include ‘friendliness of staff’ or ‘appearance of staff’, both of which record high ‘impact’ scores. If it is based on all requirements scoring over 8 for importance, ‘friendliness of staff’ would squeeze in, but ‘appearance of staff’ would remain excluded. A more rounded view would be provided by the total importance matrix shown in Figure 5.3.

FIGURE 5.2 Importance and impact scores for personal banking

No.  Requirement                          Importance  Impact
10   Confidentiality                      9.62        0.31
4    Quality of advice                    9.45        0.66
14   Efficiency of staff                  9.39        0.48
2    Reliability of transactions          9.21        0.31
15   Expertise of staff                   9.01        0.46
7    Treating you as a valued customer    8.99        0.61
6    Ability to resolve problems          8.92        0.69
3    Empowerment of staff                 8.86        0.59
1    Speed of service in branch           8.73        0.42
18   Reputation                           8.64        0.65
8    Flexibility of bank                  8.49        0.66
5    Level of personal service            8.44        0.45
20   Cleanliness of the branch            8.36        0.27
16   Interest rates on borrowings         8.36        0.53
19   Accuracy of fees and charges         8.28        0.36
22   Speed of response to applications    8.15        0.30
17   Interest rates on savings            8.07        0.43
11   Friendliness of staff                8.01        0.49
24   The telephone service                7.88        0.34
12   Ease of access to branch             7.67        0.11
9    Opening hours                        7.44        0.21
25   ATM service                          7.42        0.12
13   Appearance of staff                  7.23        0.51
21   Layout of the branch                 7.04        0.29
23   Décor of the branch                  6.86        0.41
FIGURE 5.3 Total importance matrix

[Scatter chart plotting each attribute number from Figure 5.2 by impact (x axis, low to high) and stated importance (y axis, low to high). The chart is divided into four zones – Zone 1: Critical, Zone 2: Very Important, Zone 3: Important and Zone 4: Marginal – with Zone 1 nearest the top right hand corner (high importance, high impact).]
The total importance matrix is constructed from the stated importance scores (y axis) and the impact coefficients (x axis). The axis scales are simply based on the range of scores for each set of data. The attribute numbers from the first column of Figure 5.2 are shown on the chart as there is not sufficient space for the requirement names. The requirements that should be carried forward to the main survey and subsequent tracking surveys are those closest to the top right hand corner. The gaps between the diagonal clusters of attributes make the decision of what to include on the main survey questionnaire relatively easy for the bank, since there is a clear break after the requirements lying above, or extremely close to, the middle diagonal. This would result in a main survey questionnaire containing 20 customer requirements, which is the maximum length advisable (see Chapter 9), but does not exclude any attribute that has a strong case for inclusion based on either importance or impact scores.

KEY POINT Although necessary only for organisations with a mature CSM process, using importance and impact scores from a statistically reliable sample will provide additional certainty that the main survey is asking the right questions.
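The two measures combined in the matrix can be sketched in code. This is a hedged illustration with invented data: it assumes, as is common in CSM practice when correlation techniques are used, that an attribute’s ‘impact’ is estimated as the correlation between its satisfaction scores and overall satisfaction across respondents.

```python
# Sketch: derive 'impact' as the Pearson correlation between each attribute's
# satisfaction scores and overall satisfaction, alongside stated importance.
# All data here are invented; real CSM work would use survey responses.
from statistics import mean, stdev

def pearson(x, y):
    """Pearson correlation coefficient between two equal-length lists."""
    mx, my = mean(x), mean(y)
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y)) / (len(x) - 1)
    return cov / (stdev(x) * stdev(y))

# Each list holds one score per respondent (six respondents here).
overall = [8, 6, 9, 7, 5, 8]
attributes = {
    "Friendliness of staff": [9, 5, 9, 7, 4, 8],
    "Opening hours":         [6, 7, 6, 7, 6, 7],
}
stated_importance = {"Friendliness of staff": 8.01, "Opening hours": 7.44}

for name, scores in attributes.items():
    impact = pearson(scores, overall)
    print(f"{name}: stated {stated_importance[name]}, impact {impact:.2f}")
```

In this toy data, friendliness tracks overall satisfaction closely (high impact) while opening hours barely moves with it, mirroring the kind of divergence between stated and derived importance that the matrix is designed to expose.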
5.5 Repeating the exploratory research

It is not necessary to do exploratory research every time a customer satisfaction survey is undertaken. It is essential to do it before the first survey, or for any organisation that currently has a survey with ‘lens of the organisation’ questions. Of course, taking the concept of the lens of the customer to its logical conclusion, one
can’t assume that the factors determining customer satisfaction tomorrow will be the same as those responsible for it today. For example, environmental or ethical criteria may play a much bigger part in customers’ judgement of organisations in the future than they do today – or they might not. The point is, we just don’t know. For an accurate measure of customer satisfaction, the survey must always be based on the same criteria that customers use to make their satisfaction judgement. To this end, exploratory research for CSM would normally be repeated every three years to accommodate any newly emerging requirements that are important to customers or to confirm that the survey is still asking the right questions. It is not necessary to always ask the same questions for tracking comparability, since the headline measure of customer satisfaction (see Chapter 11), is a measure of the extent to which the organisation is meeting customers’ requirements. If their requirements change, it is in fact essential that the questionnaire also changes to maintain the comparability of the measure. Consider the implication of not repeating the exploratory research for ten years. Whilst one could argue that asking exactly the same questions gave comparability, the customer satisfaction index would be a measure of the extent to which the company is meeting the requirements customers had 10 years ago!
Conclusions

1. To ensure that CSM surveys are based on ‘the lens of the customer’, customers’ requirements must be accurately identified at the outset.
2. The only way to do this is to conduct qualitative exploratory research with customers.
3. Depth interviews are typically used for exploratory research in business markets whilst focus groups are more common in consumer markets.
4. For a greater degree of statistical confidence that the main survey is asking the right questions, a quantitative exploratory survey can also be conducted, with the final decision based on a combination of importance and impact scores.
5. The 15 to 20 requirements of most importance to customers will usually form the basis of the questionnaire for the main survey.
6. Exploratory research should be repeated at least every three years to ensure that the survey remains focused on customers’ most important requirements.
References

1. Dabholkar, Thorpe and Rentz (1996) "A measure of service quality for retail stores: Scale development and validation", Journal of the Academy of Marketing Science 24(1)
2. Helsdingen and de Vries (1999) "Services marketing and management: An international perspective", John Wiley and Sons, Chichester, New Jersey
3. Zeithaml and Bitner (2000) "Services marketing: Integrating customer focus across the firm", McGraw-Hill, Boston
4. Cooper and Tower (1992) "Inside the consumer mind: consumer attitudes to the arts", Proceedings of the Market Research Society Conference, The Market Research Society, London
5. Cooper and Branthwaite (1977) "Qualitative technology: new perspectives on measurement and meaning through qualitative research", Proceedings of the Market Research Society Conference, The Market Research Society, London
6. Cronin and Taylor (1992) "Measuring service quality: An examination and extension", Journal of Marketing 56
7. Carman, J M (1990) "Consumer perceptions of service quality: An assessment of the SERVQUAL dimensions", Journal of Retailing 66(1)
8. Johnson and Gustafsson (2000) "Improving Customer Satisfaction, Loyalty and Profit: An Integrated Measurement and Management System", John Wiley and Sons, San Francisco, California
9. Robinson, Faris and Wind (1967) "Industrial Buying Behavior and Creative Marketing", Allyn and Bacon, Boston
10. Brand, Gordon T (1972) "The Industrial Buying Decision", John Wiley and Sons, New York
11. Buckner, Hugh (1967) "How British Industry Buys", Hutchinson, London
12. Webster and Wind (1972) "Organizational Buying Behavior", Prentice-Hall, Englewood Cliffs, New Jersey
13. Bonoma and Zaltman (1978) "Organizational Buying Behaviour", American Marketing Association, Chicago
14. O’Rourke, Shea and Sulley (1973) "Survey shows need for increased sales calls", Industrial Marketing Management 58, (April)
15. Van Der Most, G (1976) "Purchasing Process: Researching Influencers is Basic to Marketing Planning", Industrial Marketing Management 61, (October)
16. Harding, Murray (1966) "Who really makes the purchase decision?", Industrial Marketing Management 51, (September)
17. Crimp, Margaret (1985) "The Marketing Research Process", Prentice-Hall, London
18. Rubin and Rubin (1995) "Qualitative Interviewing: The Art of Hearing Data", Sage, London
19. Rust, Zahorik and Keiningham (1995) "Return on Quality (ROQ): Making service quality financially accountable", Journal of Marketing 59(1)
20. McGivern, Yvonne (2003) "The Practice of Market and Social Research", Prentice Hall / Financial Times, London
21. Szwarc, Paul (2005) "Researching Customer Satisfaction and Loyalty", Kogan Page, London
22. Gordon, Wendy (1999) "Goodthinking: A Guide to Qualitative Research", Admap, London
23. Haire, Mason (1950) "Projective Techniques in Marketing Research", Journal of Marketing, (April)
24. Gordon and Langmaid (1998) "Qualitative Market Research", Gower, Aldershot
25. Hill and Alexander (2006) "The Handbook of Customer Satisfaction and Loyalty Measurement", 3rd Edition, Gower, Aldershot
CHAPTER SIX
Sampling

Asking the right questions is the most fundamental factor determining the accuracy of a customer satisfaction measure, but it is also essential to ask them of the right people. This is a matter of accurate sampling, and to highlight the problems caused by unrepresentative samples, let’s briefly consider the issue of ‘voodoo polls’. With the widespread adoption of instant communication methods such as mobile phones, texting and email, the media are awash with ‘voodoo polls’. They are a particular favourite of radio, where listeners are encouraged to make their views known on a topical issue of the day by sending a text message or an email. The radio programme will later announce the result of its ‘survey’: “76% of the British public think the death penalty should be restored” or “58% of listeners think the Prime Minister should resign”. It is vital to understand the difference between a controlled survey with a representative sample of the targeted population and any kind of voluntary forum such as a phone-in or its electronic equivalent. Voluntary exercises, where anyone motivated to do so phones in, emails or sends a text, notoriously suffer from unrepresentative samples dominated by people holding extreme views. These exercises have been labelled ‘voodoo polls’ and their results often bear little relation to what most people think, so are completely unreliable. Although the necessity of basing the results of a CSM survey on a representative sample of customers is widely acknowledged, the technical aspects of doing so are little understood and often neglected. Many CSM surveys are no better than voodoo polls.
At a glance

This chapter will explain the theory and practicalities of sampling. In particular we will:
a) Explore the statistical basis for sampling theory.
b) Demonstrate how to generate a sample that is unbiased as well as representative.
c) Explain how large a sample needs to be for CSM.
d) Examine the special requirements of sampling in business-to-business markets.
6.1 Statistical inference

Most scientific principles were developed by drawing conclusions or inferences from
‘observations’, typically generated by experiments. If the observations were representative of a much larger number of similar occurrences (like Newton’s apple and gravity), an important scientific fact was discovered. For this process to be of any use, scientists need to be confident that their sample of observations really does apply to the total population of such phenomena, events or behaviours1. First of all, this means that the sample of observations must be representative, i.e. without bias. You couldn’t say that water freezes at 0°C unless your experiments covered the full range of temperatures at which it might freeze. The way to ensure that samples are representative is to select them randomly. For the water freezing experiment you would randomly select your sample of observations from a much larger number of tests that were conducted. If you wanted to be confident that 76% of the British public really do want the death penalty back, you would have to ensure that your sample was randomly generated, so that it was totally unbiased, rather than rely on people who felt sufficiently opinionated to phone a radio programme. Secondly, since there is always some error associated with the process of experimentation or measurement, it is important to be able to quantify the amount of error, or ‘margin of error’, that might apply to the results.
6.2 Measurement error

As long ago as the 17th century, scientists studying astronomy and geodesy were trying to understand why measures of the same thing, such as size, shape or distance, often differed, albeit only slightly. Was it inaccurate instruments or errors by the scientists? The difference between individual observations and the real measure became known as observation or measurement error2. The types of error being considered in the 17th century would now be called systematic error.

6.2.1 Systematic error

Imagine we wanted to establish the average height of adult males in the UK. A very reliable way to do this would be to take a tape measure around the country, physically measure every male over 18 years old and calculate the mean height. Whilst accurate, this would be a very time consuming and costly exercise. It is therefore common practice to base the results on measuring a sample, but to fully understand the outcome it is necessary to be aware of the types of measurement error that can occur. The first, ‘systematic error’, is relatively easy to identify and eliminate. If our sample contained a lot of young men but not many old ones (a systematic age bias), we would almost certainly conclude that British males are taller than they really are. This problem can obviously be eliminated by checking that the sample contains the correct proportions of each age group. Another possible measurement error would be a poor tape measure that was systematically over or under recording the real height of the males measured. With either of those errors present, it wouldn’t matter how many times we did the survey or what the sample size was; we would always get the answer wrong in a systematic way.
6.2.2 Random error

By the 18th century, scientists had also begun to realise that errors could happen by chance, and this second type of measurement error became known as random error3. It is harder to explain in an intellectual way because, fundamentally, it means bad luck. When we were measuring the men, we could have had a scientifically calibrated measuring device and a sample that was representative according to every demographic variable yet invented. However, if we were unlucky, our randomly sampled young, middle aged and older men could have been slightly shorter than other men of their generation, giving us an average height that was below the real mean for all UK adult males. As we will see in Chapter 11, provided the sample is random and representative, this margin of error or ‘confidence interval’ can be quantified4.
6.3 Reliable samples

Organisations that want to know the accuracy of their customer satisfaction index, or any other customer satisfaction measures they are monitoring, will have to ensure that their sample is random, representative and large enough. This section explains how these essential elements of a reliable sample are achieved.

6.3.1 Random samples

The technical term for a random sample is a ‘probability sample’. Its key characteristic, as far as research is concerned, is that it is totally without bias, because in a probability sample everybody in the population concerned stands an equal chance (probability) of ending up in the sample5. An obvious example of a random sample is a lottery. Each ball or number remaining in the lottery pool stands an equal chance of being the next ball drawn. Clearly, no element of bias can affect the selection of numbers in a lottery. Since absence of bias will be a critical factor in the credibility of a customer satisfaction measure, a probability sample is clearly essential.

KEY POINT Only a random sample will ensure an unbiased result.

To draw a random sample a clearly defined sampling frame is needed. Broadly speaking it is an organisation’s population of customers, but its precise definition requires careful thought6. Organisations that measure customer satisfaction annually would typically include in the sampling frame all customers who have dealt with the business in the last twelve months. However, that would not be very sensible for a call centre measuring the satisfaction of customers who had contacted it to make an enquiry, renew a policy or change a direct debit. For this type of limited customer experience it would be normal to use a much shorter time frame, such as customers contacting the call centre in the last month.
6.3.2 Generating an unbiased and representative sample

To generate a random, representative sample for a CSM survey, first sort the database into relevant customer segments, for example age and gender. Using the example shown in Figure 6.1, the database is sorted so that all the under 25 year old males are first on the list, followed by all the under 25 females, and so on.

FIGURE 6.1 Database sorted into segments

[Diagram showing the customer database sorted into segment blocks: Under 25s (M, F), 25-34s (M, F), 35-44s (M, F) and Over 44s (M, F).]
If there were 1000 customers on the list, the starting point is to generate a random number between 1 and 1000 and begin sampling from that point. So, if the random number came out as 346, the 346th customer on the list would be the first to be sampled. For a sample of 100, which is 1 in 10 of the population concerned, every 10th customer is sampled. So in this example the process would sample the 356th customer on the database, the 366th, the 376th, and every 10th name thereafter, until arriving all the way back round to the 336th customer on the list. This would produce a random sample of 100 customers. Before the random number was generated, every customer on the list stood an equal chance of being included in the sample, so the sample will be completely without bias; and it will also be representative, since it will inevitably have sampled 1 in 10 of the customers in each segment. This is known as a systematic random sample and is the best way of producing a sample that is both random and representative7. Systematic random sampling would be problematic only if there were a structure to the database that would cause a particular type of customer to be completely missed by the exercise. For example, if every alternate name on the list was male and every other name female, any sampling interval that was an even number would select only males or only females. Of course, in the real world this is a very unlikely eventuality for CSM, especially if the database is sorted into blocks of segments. In the example described above, the sampling automatically selects the same proportion of both genders and all other segments of interest.
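The systematic random sampling procedure just described can be sketched as below. The customer names and `systematic_sample` helper are hypothetical; the mechanics (sorted list, random start, fixed interval, wrap round) follow the example in the text.

```python
# Sketch of systematic random sampling: sort the customer list into segment
# blocks, pick a random start, then take every k-th customer, wrapping round
# to the start of the list when the end is reached.
import random

def systematic_sample(customers, sample_size):
    """Return a systematic random sample of the given size."""
    interval = len(customers) // sample_size          # e.g. 1000 // 100 = 10
    start = random.randrange(len(customers))          # random starting point
    return [customers[(start + i * interval) % len(customers)]
            for i in range(sample_size)]

# Hypothetical database of 1000 customers, already sorted into segment blocks.
customers = [f"customer_{i:04d}" for i in range(1, 1001)]
sample = systematic_sample(customers, 100)
print(len(sample))  # 100 customers, 1 in 10 of each segment block
```

Because every position on the list had an equal chance of being the starting point, each customer had an equal chance of selection, which is what makes the result a probability sample.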
6.4 Sampling in business-to-business markets

In a B2B market, sampling will be a two-step process. First, a randomly selected and representative sample of organisational customers must be generated. Of course, only individuals can be surveyed, not organisations, so the second step involves sampling the individual contacts. They must also be representative and randomly selected.

6.4.1 Sampling the organisations

For many companies in B2B markets that have a strong Pareto Effect in their customer base, systematic random sampling will not produce a satisfactory outcome. If a large proportion of a company’s business comes from a small number of high value customers and a much smaller percentage from a very large number of relatively low value customers, any random sampling process will inevitably capture many small customers and few big ones, as shown in Figure 6.2. This would clearly not be representative, so to achieve a sample that is representative as well as unbiased in most B2B markets, stratified random sampling has to be used8.

FIGURE 6.2 Random sampling and the Pareto Effect

[Chart plotting % of revenue (y axis) against % of customers (x axis), showing a steep Pareto curve. Annotations: ‘Random sampling would generate too many small and not enough large customers’ and ‘Systematic random sampling has the same effect’.]
KEY POINT To achieve a representative sample, companies with a strong Pareto Effect in their customer base need to use stratified random sampling.

Producing a stratified random sample involves dividing the customers into value segments first and then sampling randomly within each segment. As illustrated in Figure 6.3, the sample will be representative according to the value contributed to the business by each segment of customers. In the example shown, the company derives 70% of its turnover from high value customers. The fundamental principle of sampling in a B2B market is that if a value segment accounts for 70% of turnover (or profit, or however you decide to define value), it should also make up 70% of the sample.
If the company has decided to survey a sample of 200 customers, 140 respondents (70% of the sample) would be required from the high value customers. There are 35 high value customers, so that necessitates a sampling fraction of 4:1, meaning 4 contacts from each customer in the high value segment. In business markets it is common practice to survey several individuals from the largest customers. Since there will often be quite a large number of people in the DMU of a large customer (see Chapter 5), having enough contacts to survey is rarely a problem.

FIGURE 6.3 A stratified random sample

Value segment   % of turnover   % of sample   No of customers   Sampling fraction
High            70%             70%           35                4:1
Medium          20%             20%           120               1:3
Low             10%             10%           300               1:15
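The arithmetic behind this stratification can be sketched as follows. `stratum_plan` is a hypothetical helper; the segment shares and customer counts are taken from the worked example in the text.

```python
# Sketch of the stratified sampling arithmetic: each value segment's share
# of the sample matches its share of turnover, so a segment worth 70% of
# turnover contributes 70% of the respondents.
def stratum_plan(total_sample, segments):
    """segments: {name: turnover_share} -> respondents required per segment."""
    return {name: round(total_sample * share)
            for name, share in segments.items()}

segments = {"High": 0.70, "Medium": 0.20, "Low": 0.10}
plan = stratum_plan(200, segments)
print(plan)  # {'High': 140, 'Medium': 40, 'Low': 20}
```

Dividing each segment’s respondent target by its customer count then gives the sampling fractions in Figure 6.3: 140 respondents from 35 high value customers means 4 contacts per customer (4:1), while 40 from 120 medium value customers means 1 customer in every 3 (1:3).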
In the example, the medium value customers account for 20% of turnover, so they must make up 20% of the sample. That means the company needs 40 respondents from its medium value customers. Since there are 120 customers in that value segment, the sampling fraction would be 1:3, necessitating a random sample of 1 in every 3 medium value customers, which could easily be produced using the same systematic random sampling procedure described earlier. First generate a random number between 1 and 120. If the random number came out as 71, the 71st medium value customer on the list would be sampled, followed by the 74th, the 77th and so on, until the sampling process came back round to the 68th medium value customer on the list. Finally, 10% of the company’s business comes from low value customers, so they must make up 10% of the sample, requiring 20 respondents in this example. There are 300 low value customers, which would mean a sampling fraction of 1:15, again produced using systematic random sampling from a random starting point within the low value customer segment. By the end of the process the company would have produced a stratified random sample of customers that was representative of its business and, due to its random selection, would also be without bias.

6.4.2 Sampling the contacts

The procedure described above has produced a random and representative sample of B2B customers, but the individual respondents who will take part in the survey must also be selected. Organisations often choose the individuals on the basis of convenience – the people with whom they have most contact, whose names are readily to hand. If the individuals are selected on this basis, an element of systematic error is introduced. It would mean that however carefully a stratified random sample of companies had been drawn, at the 11th hour it has degenerated into a convenience sample of
individuals that somebody knows – little better than a voodoo poll. To avoid that major injection of bias, the individuals must also be randomly sampled. The way to do this is to compile a list of the individuals who are affected by the product or service at each customer in the sample and then select individuals randomly from that list.

KEY POINT For a reliable result, individual contacts in a B2B market must also be randomly sampled.

Illustrated in Figure 6.4, the process works as follows. First list the DMU roles in a random order. In our hypothetical example, the DMU roles are Sales (S), Quality (Q), Purchasing (P) and Senior Management (M). It is important to be clear that these are roles, not job titles, as titles vary considerably across organisations. For the high value customers a census of contacts as well as a census of companies will be required. To sample the medium value customers a random number of 71 was generated, so the 71st medium value customer on the list would be sampled, with a contact from Sales to be surveyed. Taking every third medium value customer, the 74th on the list would need someone with responsibility for Quality, the 77th a Purchasing contact and the 80th someone in a Senior Management role. As shown in Figure 6.4, the same procedure is then followed for the low value segment. In business-to-business markets, following the stratified random sampling approach described above is essential for an accurate and credible CSM result9. It provides a random and representative sample of organisations and individuals, so it will be an accurate result whose statistical reliability can be justified. At least as important in B2B markets, colleagues not versed in the technical aspects of CSM will see it as reliable and credible because it accurately reflects the realities of the business, covering all the key accounts and only a sample of smaller ones.
FIGURE 6.4 Randomly sampling the individuals

[Table listing the large, medium and low value customers down the side with columns for the DMU roles S, P, Q and M. Every role is ticked for the large customers (a census of contacts); for the medium and low value segments one role is ticked per sampled customer in rotation, e.g. the 71st medium value customer: Sales, the 74th: Quality, the 77th: Purchasing, the 80th: Senior Management.]
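The rotation of DMU roles across the sampled customers can be sketched as below. `assign_roles` is a hypothetical helper; the role order and customer positions follow the medium value example in the text.

```python
# Sketch: pair each sampled customer with the next DMU role in rotation,
# so every role is covered evenly across the sampled companies.
from itertools import cycle

def assign_roles(sampled_customers, roles):
    """Pair each sampled customer with the next DMU role in rotation."""
    role_cycle = cycle(roles)
    return [(customer, next(role_cycle)) for customer in sampled_customers]

roles = ["Sales", "Quality", "Purchasing", "Senior Management"]
medium_sample = [71, 74, 77, 80, 83]   # positions on the sorted customer list
for position, role in assign_roles(medium_sample, roles):
    print(f"customer {position}: {role}")
```

Listing the roles in a random order before starting the rotation, as the text advises, prevents any systematic pairing of particular roles with particular customer positions.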
6.5 Sample size

The process so far described has generated a random and a representative sample, both of which are essential for an accurate measure. However, as we said at the beginning of this chapter, organisations need to know how much measurement error there might be in their result, and the size of the sample is instrumental in determining this margin of error. We will therefore consider how many customers it is necessary to survey to achieve a statistically reliable result. Some companies, typically in business-to-business markets, have a very small number of accounts. Other companies have millions of customers. In a business market the size of the population for a CSM survey is the number of individual contacts rather than the number of organisations on the database. Even so, companies in B2C markets often have many more customers than their B2B counterparts. This contrast helps to illustrate a very commonly misunderstood rule of sampling. Statistically, the accuracy of a sample is based on the absolute size of the sample. A bigger sample will always be more accurate than a smaller one10, regardless of how many customers the organisation has. Asking what proportion of customers should be surveyed is not a relevant question. Imagine the answer to the question was 10%. A B2B company with 1,000 customer contacts would have to survey a sample of 100, which seems OK. However, a company in a B2C market with 2,000,000 customers would have to survey 200,000 – clearly an excessive number. Alternatively, if the answer were 0.1%, the B2C company would have to survey 2,000 customers but the B2B supplier would be surveying only 1 person! Clearly, the answer cannot possibly be any specific percentage of the customer base.

KEY POINT The reliability of a sample is based on the absolute size of the sample rather than the proportion of the customer base surveyed.

Back to our quest to discover the average height of adult males.
Whilst it would be a gross exaggeration to say that no two men are the same height, the range from the shortest to the tallest is wide – from around 3 feet to over 7 feet. If we based our average height of UK adult males on a sample of ten, our result would have a high margin of error. Even if our sample were random and representative, with absolutely no systematic error, a sample of only ten males would be highly vulnerable to random error. One of our randomly selected males could be 7 feet 9 inches tall. Unlikely, but it is within the known range and could happen if we were unlucky. If it did, it would have a strongly disproportionate effect on the result. As the sample size increases, two things happen to greatly improve its reliability. Firstly, the impact made by an exceptionally tall or short male on the mean height reduces as the sample grows. By the time the sample size reaches 200, an exceptionally tall male could not distort the mean height by more than about 0.1 inch.
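The arithmetic behind that claim is easy to check. The sketch below uses the illustrative figures from this example – a typical height of 5 feet 10 inches (70 inches) and an outlier of 7 feet 9 inches (93 inches):

```python
# How much one extreme value can shift a sample mean.
# Illustrative figures: typical adult male height 70 inches (5 ft 10),
# one exceptional male of 93 inches (7 ft 9).

def mean_distortion(outlier: float, typical: float, n: int) -> float:
    """Shift in the mean when one typical member of a sample of n
    is replaced by the extreme value."""
    return (outlier - typical) / n

for n in (10, 50, 200):
    print(f"sample of {n}: mean shifted by {mean_distortion(93, 70, n):.2f} inches")
```

In a sample of ten the single outlier drags the mean up by more than two inches; by a sample of 200 the shift is barely a tenth of an inch, in line with the figure quoted above.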
Secondly, the probability of ending up with many unusually tall or short males decreases as the sample size grows, for the simple reason that there aren't many of them. Most men are neither very tall nor very short but somewhere in the middle – close to 5 feet 10, in fact, which is why that is the average. So as the sample size increases, the probability of drawing more men of normal height rises simply because there are far more of them.

Provided the sample is random and representative, the accuracy of a survey result will be determined by two things: the sample size, and the extent to which the customers, people or units in the population differ. If all adult males were exactly the same height, the number of men you would need to measure to know the average height, even with a population of 20 million, would be precisely one. If all the customers held identical views, you would have to interview only one customer to understand the attitudes of the entire customer base. Conversely, if the customers' views differ widely, you would have to interview a lot of them to be confident of your answers. Equally, the more variety in the height of adult males, the larger the sample needed to produce a reliable measure of average height.

FIGURE 6.5 Normal distribution curve
(Bell-shaped curve: normal data in the centre of the distribution, extreme data in the two tails.)
The normal measure of variation in numerical data is the standard deviation, which is explained in Chapter 10. Standard deviations are unique to every set of data, although there are norms for different types of survey. Since our company conducts several hundred customer satisfaction surveys each year, we know that for CSM, standard deviations are relatively low compared with many other types of survey. In fact the average standard deviation for a customer satisfaction survey is around 11 on a 100-point scale.
Figure 6.6 shows how the reliability of a sample increases as it gets bigger. At first, with very small sample sizes, reliability increases very steeply, but as the sample grows there are diminishing returns on reliability from further increases in sample size. To understand how the level of reliability is calculated, see the explanation of margin of error in Chapter 11. Figure 6.6 is specific to customer satisfaction surveys. It shows that the curve starts to flatten at around 50 respondents, and by the time the sample size has reached 200, the gains in reliability from adding more respondents are very small. Consequently, a sample size of 200 is widely considered to be the minimum for adequate overall reliability in CSM. Companies with a small customer base should simply carry out a census survey. Since response rates will vary considerably across different industries and between methods of data collection (see next chapter), companies with up to 600 customers will often find it most efficient to conduct a census, especially for self-completion surveys.

FIGURE 6.6 Sample size and reliability
(Reliability plotted against sample size from 0 to 200: the curve rises steeply at first, flattens from around 50 respondents and is almost level by 200.)
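The shape of this curve can be reproduced from the standard margin-of-error formula for a sample mean (margin of error is explained in Chapter 11). The sketch below assumes the typical CSM standard deviation of 11 points on a 100-point scale and 95% confidence (z = 1.96):

```python
import math

def margin_of_error(sd: float, n: int, z: float = 1.96) -> float:
    """95% confidence margin of error for a sample mean."""
    return z * sd / math.sqrt(n)

SD = 11  # typical standard deviation for CSM on a 100-point scale
for n in (10, 50, 100, 200, 400):
    print(f"n = {n:>3}: margin of error is plus or minus {margin_of_error(SD, n):.1f} points")
```

The margin of error narrows quickly up to about 50 respondents and much more slowly thereafter; doubling the sample from 200 to 400 tightens it by only about half a point.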
Two additional factors must be taken into account when considering sample reliability: firstly, the extent to which the result must be broken down into different sub-groups and, secondly, the response rate.

6.5.1 Drilling down
So, we have established that a sample of 200 provides good reliability for an overall measure of customer satisfaction, whether the customer population is 500 or 500,000. However, organisations that want to drill down into the results to compare the satisfaction levels of different segments may need a larger sample. For example, a sample of 200 broken down into 10 regions would result in a small and unreliable
sample of 20 customers for each region. Therefore, it is generally accepted that the minimum overall sample size is 200 and the minimum per segment is 50 – the point at which the curve starts to flatten.

KEY POINT For a reliable measure of customer satisfaction, the minimum sample is 200 respondents overall and at least 50 per segment.

For some companies, therefore, the total sample size may be determined by how many segments they want to drill down into. For example, organisations wanting to divide their result into 6 segments would need a sample of at least 300 to ensure 50 in every segment. This can have a major impact for companies with multiple branches or outlets. On the basis of 50 per segment, a retailer with 100 stores would need a minimum sample of 5,000 if customer satisfaction is to be measured at store level. However, our view is that if comparisons are to be made between stores and management decisions taken on the basis of the results, at least 100 customers per store should be surveyed, and preferably 200. For a retailer with 100 stores, this would mean a total sample of 20,000 customers for a very reliable result at store level.

6.5.2 Sample size and response rates
One final point on sampling: the recommended sample size of 200 for adequate reliability is based on responses, not the number of customers sampled and invited to participate. For example, if the response rate to a postal survey were 50%, 400 customers would have to be sampled and mailed questionnaires. However, if the response rate is very low, it is not statistically valid to compensate by simply sending out more questionnaires until 200 responses are achieved. Low response rates are extremely detrimental to the reliability of customer satisfaction measures and will be explored in more detail in the next chapter.

KEY POINT For a reliable result it is essential to achieve a good response rate as well as 200 responses.
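The two rules of thumb in this chapter – at least 200 responses overall, at least 50 (or 100 to 200 for store-level comparisons) per segment, grossed up for the expected response rate – can be sketched as follows. The function names are illustrative, not from the book:

```python
import math

def total_responses_needed(segments: int = 1, per_segment: int = 50,
                           overall_minimum: int = 200) -> int:
    """Responses needed when results are broken down into segments,
    never falling below the overall minimum of 200."""
    return max(overall_minimum, segments * per_segment)

def customers_to_sample(responses_needed: int, response_rate: float) -> int:
    """Customers to mail so that the expected responses hit the target."""
    return math.ceil(responses_needed / response_rate)

print(total_responses_needed(6))                     # 6 segments -> 300
print(total_responses_needed(100, per_segment=200))  # 100 stores at 200 each -> 20000
print(customers_to_sample(200, 0.5))                 # 50% response rate -> mail 400
```

Note that grossing up the mailing compensates only for the number of responses, not for non-response bias, which is why the response rate itself must also be kept high.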
Conclusions
1. For an unbiased result a probability (random) sample is necessary.
2. For CSM, the sampling frame would typically be all current customers, but may need a time frame qualification for brief, one-off customer experiences.
3. In B2C markets systematic random sampling will generate a sample that is both unbiased and representative.
4. In B2B markets a sample that accurately represents the huge variation in customer values will be achieved only through stratified random sampling.
5. Sampling should be based on a sampling frame comprising relevant individuals. In B2B this will often involve a number of individual contacts (occasionally a large number) from high value customers.
6. Based on typical standard deviations for customer satisfaction surveys, 200 responses is the minimum sample size for a reliable measure at the overall level, whatever the size of the customer base.
7. Organisations with fewer than 200 customers or contacts should conduct a census survey.
8. If the results are to be broken down into segments, the minimum sample size per segment should be at least 50 responses. In such cases the total sample size will be the number of segments multiplied by 50.
9. As well as enough responses it is also essential to achieve an adequate response rate, and this will be covered in the next chapter.
References
1. Norman and Streiner (1999) "PDQ Statistics", BC Decker Inc, Hamilton, Ontario
2. Bennett, Deborah (1999) "Randomness", Harvard University Press, Cambridge, Massachusetts
3. Pearson and Kendall (1970) "Studies in the History of Statistics and Probability", Charles Griffin and Co, London
4. Kotler, Philip (1984) "Marketing Management: Analysis, Planning and Control", Prentice-Hall International, Englewood Cliffs, New Jersey
5. Hays, Samuel (1970) "An Outline of Statistics", Longman, London
6. Kish, Leslie (1965) "Survey Sampling", John Wiley and Sons, New York
7. McGivern, Yvonne (2003) "The Practice of Market and Social Research", Prentice Hall / Financial Times, London
8. Crimp, Margaret (1985) "The Marketing Research Process", Prentice-Hall, London
9. McIntosh and Davies (1996) "The Sampling of Non-domestic Populations", Journal of the Market Research Society 38
10. Kotler, Philip (1984) "Marketing Management: Analysis, Planning and Control", Prentice-Hall International, Englewood Cliffs, New Jersey
CHAPTER SEVEN
Collecting the data

Exploratory research and sampling can be seen as essential prerequisites of conducting a customer satisfaction survey. A questionnaire will also be needed, but it can be designed only once it is known how the survey will be administered. Market research textbooks call this the method of data collection.
At a glance
In this chapter we will:
a) Describe the different methods of data collection
b) Review the advantages and disadvantages of each method
c) Explore ways to maximise response rates
d) Explain how to introduce the survey to customers
e) Discuss respondent confidentiality
f) Consider how often customers should be surveyed.

Fundamentally there are only two methods of collecting data for a customer survey1. Customers can be interviewed or they can be asked to complete a questionnaire by themselves. Of course, there are different ways of interviewing people and more than one type of self-completion questionnaire, so we will start by clarifying the options within each method of data collection.
7.1 Self-completion surveys
For customer satisfaction measurement there are two basic choices of self-completion survey – electronic or paper. Within each, the choice is between different ways of getting the questionnaire out to and back from the customers.

7.1.1 Electronic surveys
We should initially distinguish web from email surveys. An email survey involves sending questionnaires to customers by email, either as a file attachment or in the body of the email itself. The customer completes the questionnaire off-line and returns it to the sender.
A web survey involves logging onto a website and completing a questionnaire on-line. When the respondent clicks a button to submit the completed questionnaire, the information is automatically added to the database of responses. A web survey is normally conducted over the internet but can be set up on an intranet for internal customers. It is usually preferable to an email survey since it avoids the software problems that can arise with file attachments, looks more attractive and professional, supports more questionnaire design options (such as routing), and eliminates data entry costs. However, a disadvantage of web surveys in B2B markets is that in some organisations there will be employees who are authorised to use email but not the internet, so the target group may not be fully accessible.

People who do respond to electronic surveys will often do so more quickly than for other types of self-completion survey, but there is no evidence that response rates are higher. Indeed, whilst a few years ago the term 'junk mail' typically referred to unsolicited postal mail, it is increasingly email that suffers from this problem, especially in B2B markets. It is not in the least unusual for managers, especially in larger organisations, to receive well over 100 emails each day. Since instant deletion of everything unnecessary is most people's survival strategy in this situation, the odds are stacked heavily against a survey email.

Web surveys, typically in the form of site exit surveys, are useful for e-commerce businesses, especially for measuring perceptions of the website itself. However, even for e-commerce businesses this type of exit survey should not be confused with a full measure of customer satisfaction, since it precedes order fulfilment and ignores any requirement for after-sales service.
For a worthwhile measure of customer satisfaction, e-businesses should invite a random sample of customers to complete a web survey that covers the total customer experience. Even then, a web survey with a low response rate will suffer from non-response bias in the same way as a postal survey with a poor response rate.

B2C businesses whose customers come to them, e.g. hotels, restaurants and retailers, can set up an electronic survey on their own premises on a laptop or a specialist touch-screen computer. Apart from the initial investment in the capital equipment, this has all the advantages of a low-cost paperless survey, but it must be very simple to allow for people who are not computer literate. As with any survey conducted at the point of sale, it may not capture the entire customer experience, and it will reflect customers' 'heat of the moment' attitudes rather than their more objective long-term judgements, which are a better lead indicator of their future loyalty behaviour.

A similar method is IVR (interactive voice response), which involves customers using the telephone keypad to respond to an automated survey. If IVR is used, customers are typically transferred to the survey at the end of a call centre interaction, so it suffers from the same 'heat of the moment' disadvantages as the touch screen. Also,
unless there is an automated process that randomises the sample transferred to the survey, call centre staff may not put through dissatisfied customers. IVR surveys also suffer from customers' general dislike of automated telephone response systems and have to be very short to minimise early terminations. IVR is therefore not normally suitable for measuring customer satisfaction.

Email surveys are easier to set up than web surveys, since a good web survey process needs to:
- Be set up and hosted on a website
- Have a system for informing and inviting the target group to visit the site to participate (normally an email invitation with a link)
- Be able to issue potential respondents with a unique password
- Have appropriate security in place to ensure that only targeted respondents with verifiable passwords can complete the survey
- Ensure that respondents can complete the survey only once, by checking against passwords
- Be able to remind non-responders if necessary
- Be simple but attractive in design and quick to load
- Allow for tick-box, scoring and text responses
- Have various checks in place to ensure that only valid responses are allowed and to check for completeness of response
- Be thoroughly tested for technical operation
- Transfer responses to a data repository
- Transform responses into an input file format compatible with the statistical analysis programme
- Be capable of transferring the data to the analysis system at regular intervals or on request.

Despite the greater setup costs of web surveys compared with email surveys, they exploit the benefits of the electronic medium much more extensively, so the advantages and disadvantages of electronic surveys will be reviewed in the context of web surveys.

KEY POINT Taking everything into account, web surveys are more efficient than email surveys.

(a) Advantages of web surveys
1. The major advantage is the lower cost. With no printing, postage or data capture costs, a large survey will be considerably cheaper.
For smaller surveys the savings would be lower since the fixed cost elements will account for a larger proportion of the total. 2. One benefit of the instant data capture is that the results of the survey can be examined whilst it is in progress.
3. With appropriate software, routing (missing out questions that are not relevant) and probing of low satisfaction scores are possible.
4. Sophisticated web interviews are increasingly feasible provided the respondent has equipment of the right standard. These might involve playing music or videos, showing pictures, using speech in questions and capturing speech in responses, and will increase in popularity as the technology becomes more widely adopted.

(b) Disadvantages of web surveys
1. Web surveys must be easy to use and thoroughly tested, since respondents will quickly give up if they experience problems completing the questionnaire.
2. Most internet users do not stay very long at any site, so questionnaires have to be short, resulting in the collection of less information than from most other methods of data collection.
3. Since only a few questions can be displayed in one screen view, the questionnaire will often feel longer as respondents move from one 'page' to the next, with the full extent of the questionnaire difficult to visualise. By contrast, the length of a postal survey can be assessed almost instantly.
4. Some respondents will worry that their responses can be identified with their name or email address, rendering the survey non-confidential. This is a particularly pressing concern when employees or internal customers are surveyed by means of an intranet questionnaire. The concern can be reduced by using an independent third party as the invitation email address and the host of the web questionnaire. Interestingly, the most suspicious respondents are IT specialists.
5. Low response rates afflict many types of self-completion survey, but for all the reasons listed above they are a particular problem for web surveys. Surveys with a low response rate suffer from the problem of 'non-response bias'2.
This means that the sample of respondents who completed the questionnaire is not representative of the full population of customers. Results can be adjusted at the analysis stage to eliminate some forms of bias, such as demographic imbalances. Attitudinal bias, however, cannot be corrected, since it is not possible to know what attitudes the non-responders hold. Attitudinal bias is normally the biggest problem for customer satisfaction surveys with low response rates, since customers who have encountered a problem or hold negative attitudes about the supplier are typically much more motivated to respond than the silent majority. Since the response rate needed to eliminate non-response bias is at least 30% and the average response rate for self-completion customer satisfaction surveys is 20%, the scale of the problem is considerable. Web surveys tend to suffer from even lower response rates than postal surveys. In extreme cases this results in the inverse of a normal distribution, with many respondents scoring very low or very high and few in the mid-range of the scale.
6. Regardless of response rates, it is extremely difficult for many B2C businesses to
generate a representative response from web or email surveys, since many of their customers are not active internet users. Household access to the internet in the UK was 61% at the end of 2006, only 4% higher than two years previously, with the rate of growth slowing3. This suggests that it will be some time before electronic surveys are capable of providing representative samples for most B2C businesses.
7. Off-the-shelf software, whilst plentiful and cheap, can be unsuitable for customer satisfaction surveys. Common problems include inadequate password protection, an inability to probe low satisfaction scores and the failure to offer a 'not applicable' option. Some web surveys refuse to allow the respondent to proceed to the next question if a score has not been given, forcing customers with no views or no experience of that attribute to invent a score. This quickly leads most people to disengage from the process. Some web survey software purports to handle the analysis and reporting of the results as well as the data collection, but the analysis modules are often very simplistic and restrictive, making it impossible to achieve the essential outcomes that customer satisfaction measurement must provide (see Chapters 11 to 15).

KEY POINT It is not possible for many B2C organisations to achieve a representative sample from an electronic survey.

7.1.2 Paper-based surveys
Self-completion surveys have traditionally been conducted on paper, usually through the post, although fax is possible, as are other distribution media such as a customer newsletter. An ideal way of undertaking a self-completion survey is to personally distribute questionnaires to customers and then collect them once completed – an approach that is often feasible with internal customers or with external customers who visit the company's premises.
Distribution and collection is far preferable to simply making questionnaires available for customers who choose to take the opportunity to fill them in – an approach often associated with hotels. Response rates for this latter type of survey are extremely low, often below 1%, resulting in enormous non-response bias. It can fulfil a valid role as an additional complaints channel but should never be taken seriously as a measure of customer satisfaction. Distribution and collection, by contrast, typically achieves a very good response rate.

Most paper-based surveys are postal and involve a questionnaire, an introductory letter and a postage-paid reply envelope mailed to a representative and randomly selected sample of customers. A deadline for responses should always be clearly marked on the letter and on the questionnaire; this would normally be two weeks after customers have received the questionnaire. In addition, it is good practice to allow at least another week for late responses before cutting off the survey and analysing the results.
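The timetable just described – a deadline two weeks after the questionnaire arrives, then at least one further week before cutting off the survey – can be sketched as follows (the dates are examples only):

```python
from datetime import date, timedelta

def postal_survey_dates(questionnaire_received: date) -> dict:
    """Response deadline two weeks after receipt; analysis cut-off
    at least one further week later, to catch late responses."""
    deadline = questionnaire_received + timedelta(weeks=2)
    cut_off = deadline + timedelta(weeks=1)
    return {"deadline": deadline, "cut_off": cut_off}

dates = postal_survey_dates(date(2007, 7, 5))
print(dates["deadline"], dates["cut_off"])  # 2007-07-19 2007-07-26
```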
(a) Advantages of postal surveys
1. Although slightly more costly than electronic surveys, postal surveys are usually much cheaper than interviewing customers.
2. From a practitioner's point of view, postal surveys are very easy to conduct. Web survey software does not have to be purchased or learned, and interviewers do not have to be recruited, briefed or monitored.
3. If professionally designed and printed, paper questionnaires can be made visually attractive for customers.
4. There is no risk of interviewer bias.
5. Many customers will see a postal questionnaire as the least intrusive form of survey.
6. A postal survey returned to a third party will also be seen by respondents as the most confidential and anonymous survey method (see Section 7.5 for a review of the advantages and disadvantages of respondent anonymity).

(b) Disadvantages of postal surveys
1. Response is slow. Even with a clearly marked deadline, some questionnaires will come back much later.
2. As with web surveys, a low response rate results in 'non-response bias', which seriously distorts the result. The lower the response rate, the bigger the problem. It is vital to understand that the sample size and the response rate are two completely different things: it is essential to have a sample of at least 200 responses and a response rate of at least 30% – not one or the other.
3. Compared with telephone interviews, neither routing nor probing of low satisfaction scores works well on paper-based questionnaires. These drawbacks can be alleviated by providing a clearly marked 'not applicable' option for questions that are not relevant to a respondent, and by including one or more spaces for comments, with encouragement to customers to explain any low satisfaction scores that they gave. On average, comments are written by one in three respondents to postal surveys.
4. Due to the inability to probe, little explanatory detail will usually be generated1.
When trying to understand the reasons for any customer dissatisfaction, this is a considerable disadvantage.

KEY POINT For a reliable result the sample should be at least 200 responses and the response rate at least 30%.

7.1.3 Maximising response rates
Whilst the average response rate for customer satisfaction surveys by post is around 20%, this masks an extremely wide variation, from below 5% to over 60%. Typically, the more important the topic is to the customer, the higher the base response rate will be4. For example, a satisfaction survey for a membership organisation is likely to generate a higher response rate than a survey by a utility company. In business
markets, customers are more likely to complete a survey for a major supplier than a peripheral one. Since it is vital to avoid non-response bias, it is worth making as much effort as possible to maximise the response rate. This section outlines the main ways of doing so, starting with things that are essential before examining the additional measures that can be taken, in order of their effectiveness.

(a) Basic foundations of a good response rate
There are two things that won't increase the response rate to a customer satisfaction survey but can significantly reduce it if absent. The first is an accurate, up-to-date database including contact names, addresses, telephone numbers, email addresses and job titles as appropriate. The accuracy of databases can erode by 30 per cent annually as personnel change in business and as consumers move house or change telephone numbers and email addresses. The second essential is a postage-paid reply envelope5. Expect a significantly reduced response rate if it is omitted. Some people are tempted to try fax-back questionnaires in business markets on the grounds that it might be easier for respondents to fax their responses back. Experience shows that this assumption is generally mistaken. Many people in large offices do not have easy access to a fax machine, and fax use is declining as email grows. Therefore, by all means include a fax-back option and a prominent return fax number, but include a reply-paid envelope as well. International reply-paid envelopes are also available and should be included for overseas customers.

(b) Effective techniques for boosting response rates
Introductory letter
The introductory letter is the single most effective technique for boosting response rates. Research by Powers and Alderman6 found that covering letters had a significant impact on response rates. Since it is so important, Section 7.4 is devoted to explaining how to introduce the survey to customers.
In our experience a good covering letter highlighting benefits to respondents and promising feedback will boost response rates by around 30 per cent on average. To clarify the figures, this is a 30% increase over the base response, not 30 percentage points. Therefore, an average response rate of 20% achieved using none of the techniques detailed in this chapter would be lifted to around 26%. If the introductory letter is mailed on its own, two or three days before the questionnaire, it will typically achieve an additional 15% uplift7, boosting the response rate in this example to around 30%.

Reminders
A follow-up strategy is also widely endorsed by the research studies. The word strategy is important because more than one reminder will continue to generate additional responses, albeit with diminishing returns. A multiple follow-up strategy has been widely reported to have a positive effect on response rates8,9,10. It is advisable to send a duplicate questionnaire with the follow-up plus a letter
repeating the reasons for taking part in the survey. A reminder boosts response rates by 25% on average, lifting our hypothetical example to approximately 38%. Subsequent reminders will also stimulate more responses, albeit at a declining rate. In practice it would be very unusual to issue more than two reminders, but a second follow-up will typically improve the response rate by a further 12%, increasing the total in our example to around 42%.

The questionnaire
Questionnaire design, more than length, is a significant factor. If respondents' initial impression is that the questionnaire will be difficult to complete, the response rate will be depressed. Apart from very long questionnaires, length is a less significant factor11, so it is better to have clear instructions and a spacious layout spreading to four sides of A4 than a cluttered two-page questionnaire. More specifically, it makes no difference to response whether people are asked to tick or cross boxes or to circle numbers or words, nor whether space is included for additional comments. Since people are more likely to respond when they are interested in the subject matter2, any design techniques that stimulate customers' interest will be worthwhile. Manchester United and Chelsea, for example, display background images of star players on their fan satisfaction survey questionnaires, as well as sending introductory letters from well-known names such as Sir Alex Ferguson and Peter Kenyon. A professionally designed questionnaire that is appealing, easy to read and spacious can improve response rates by up to 20%, lifting our fictitious example to around 50%. Of course, a cluttered, difficult to read or otherwise amateurish questionnaire will significantly reduce response rates.

Anonymity
It is conventional wisdom that response rates and accuracy will be higher where respondents are confident of anonymity and confidentiality.
Practitioner evidence strongly supports this view for employee satisfaction surveys and most types of customer satisfaction survey, especially in business markets, where the respondent envisages an ongoing personal relationship with the supplier. In mass markets, where personal relationships are normally absent, there is no conclusive evidence that anonymity increases response5. The best approach is to promise anonymity at the beginning of the questionnaire (or interview) and then, at the end, when respondents know what they have said, give them the option of remaining anonymous or being attributable. See Section 7.5 for a full discussion of anonymity.

KEY POINT A good introductory letter is the best way to maximise response rates for a customer satisfaction survey.
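Taking the average uplifts quoted in this section at face value, the techniques compound multiplicatively. The sketch below uses the chapter's percentages; the intermediate figures may differ slightly from those in the text because of rounding at each step, but the cumulative effect is the same, roughly doubling the base response rate:

```python
# Compounding the average response-rate uplifts quoted in this chapter.
# All figures are averages, not guarantees.
base_rate = 0.20  # typical postal CSM response rate with no techniques applied

uplifts = [
    ("covering letter", 0.30),
    ("advance letter mailed separately", 0.15),
    ("first reminder", 0.25),
    ("second reminder", 0.12),
    ("professional questionnaire design", 0.20),
]

rate = base_rate
for technique, uplift in uplifts:
    rate *= 1 + uplift
    print(f"after {technique}: {rate:.0%}")
```

Because each uplift applies to the rate already achieved, the order of the techniques makes no difference to the final figure of around 50%.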
(c) Marginal techniques for boosting response rates
Compared with the suggestions already made, other response-boosting techniques will generally be marginal in their effectiveness, but some can make small differences.

Money
Money is one of them. Not incentives generally, but money specifically – and money now! Research in the USA has shown that a quite modest monetary reward, such as $1 attached to the introductory letter, will have a significant effect in business as well as domestic markets12. It can also work very well in less developed countries where dollars have real street value. However, it is important that the money goes with the questionnaire. Future promises of payment to respondents are less effective, and incentives such as prize draws much less effective. This is confirmed by UK research from Brennan13, Dommeyer14 and James and Bolstein15. Some people cheekily suggest that researchers can reduce costs by enclosing money only with the first reminder – a tactic that might be feasible if you have a sample of individuals who are unlikely to communicate with each other!

Colour
The use of colour is a contentious issue. Some people advocate the use of coloured envelopes or the printing of relevant messages on envelopes. However, it is generally accepted that around 10 per cent of mailshots are discarded without being opened, so if the colour or design of the envelope gives the impression of a mailshot it is likely to depress the response rate. Since customers would normally open letters from existing suppliers, such as their bank, utility supplier, local authority or any organisation they deal with, we recommend including the organisation's name and logo on the envelope. Apart from that it should be a plain white window envelope (not a sticky address label), personally addressed to the customer5. Use of colour on the questionnaire should also be considered.
It is generally accepted that the use of more than one colour for printing the questionnaire will enhance clarity of layout and ease of completion and will therefore boost response rates. This is part of the earlier point on good questionnaire design. Some people think that printing the questionnaire on coloured paper may also help because it is more conspicuous to people who put it aside, intending to complete it in a spare moment. However, there is no conclusive evidence on paper colour, and since text on coloured backgrounds will be more difficult for some people to read, we recommend white paper with two or preferably four colour print.

(d) Ineffective techniques for boosting response rates
There are some frequently used response boosting techniques that are rarely cost-effective, since there is no conclusive evidence that they consistently improve response rates, they may reduce the quality of response and they are usually costly. They concern various types of incentive, including:

Prize draws
Free gifts
Coupons
Donations to charity

When considering incentives it is important to distinguish between direct mail and customer satisfaction measurement. There is widespread evidence in the direct marketing industry about the effectiveness of appropriate incentives in boosting the response to mailshots. Most direct mailshots involve sending out huge volumes of letters, with a very small percentage of those mailed expected to purchase the product. Due to the volumes, the cost of a prize draw can be amortised over a large mailing, and if it boosts the response rate from 1% to 1.3% for those placing an order or taking out a subscription it will have been successful. By the same token, an attractive free gift may be a cost-effective price to pay for a new subscriber who, once hooked, may renew for many years.

Customer satisfaction surveys are different. First, since the response rate without an incentive is much higher than for the vast majority of direct mail, the uplift in response has to be much greater for an incentive to be worthwhile. Secondly, the value of each response is not commensurate with a purchase resulting from a mailshot: an attractive free gift costing £20 is hardly going to be cost-effective for each returned questionnaire. Thirdly, and most importantly, you should consider the impact that typical incentives will make on your customers. They are not appropriate. They give the impression that you are trying to sell them something. They devalue the very serious purpose of a customer satisfaction survey and obscure the real benefits for customers that are inherent in the process. For these reasons, incentives for customer satisfaction surveys will often be detrimental to the quality of the response without even boosting the response rate by a worthwhile amount.
KEY POINT
Incentives are generally not a cost-effective technique for boosting response rates in customer satisfaction surveys.

Research carried out in the USA, including a study by Paolillo and Lorenzi16, suggests that the chance of future monetary reward, e.g. a prize draw, makes no difference unless it is very large. Also in the States, Furse and Stewart17 reported no effect from the promise of a donation to charity. Research in the UK by Kalafatis and Madden18 suggests that the inclusion of discount coupons can even depress response rates, probably because they give the impression that the survey is sales driven. Customers are increasingly suspicious of incentives and prizes, since there has been much adverse publicity for scams involving unsolicited phone calls to people who have, supposedly, won a valuable prize. They are then fraudulently asked to pay an amount of money to secure the prize. Sometimes, credit card or bank account details are stolen as well as the money.

FIGURE 7.1 Boosting response rates

Introductory letter: 30%
First reminder: 25%
Respondent friendly questionnaire: +/-20%
Advance notice letter: 15%
Second reminder letter: 12%
Incentive: <10%
Based on all the academic research and tests referenced in this chapter, plus the experience of ourselves and other practitioners, Figure 7.1 indicates the average effect on response rates of the measures discussed. It assumes a reasonable questionnaire mailed to a correctly addressed person, including a postage paid reply envelope, and suggests the likely increase in your base response rate. So a 25 per cent improvement on the average 20 per cent response rate would result in a 5 point uplift, to a 25 per cent response rate.
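To make the arithmetic behind Figure 7.1 concrete, it can be sketched as follows. This is our illustration only: the function name and the assumption that successive uplifts compound multiplicatively on the base rate are ours, not the authors'.

```python
def uplifted_rate(base_rate, uplifts):
    """Apply percentage uplifts from Figure 7.1 (e.g. 0.25 for a first
    reminder's +25% effect) to a base response rate, assuming the
    uplifts compound multiplicatively (an assumption of this sketch)."""
    rate = base_rate
    for uplift in uplifts:
        rate *= 1 + uplift
    return rate

# A 25 per cent improvement on a 20 per cent base rate
# gives a 5-point uplift, to 25 per cent:
print(round(uplifted_rate(0.20, [0.25]), 3))        # 0.25

# Introductory letter (+30%) followed by a first reminder (+25%):
print(round(uplifted_rate(0.20, [0.30, 0.25]), 3))  # 0.325
```

The second call shows why combining even two of the stronger measures can lift a 20 per cent base rate above 30 per cent, if the effects really do compound.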
7.2 Interviewing
Customers can be interviewed face-to-face or by telephone. Although telephone interviews are much more common for customer satisfaction measurement, we will initially consider the face-to-face options.

7.2.1 Face-to-face interviews
There are many options for conducting personal interviews. Exit interviews are conducted as people complete their customer experience. Customers can be surveyed as they leave a shop, finish their meal in a restaurant or check out of a hotel. Customers can also be interviewed during their customer experience. This can be very cost-effective if customers have time on their hands, such as waiting at an airport or travelling on a train. Doorstep interviews are convenient for consumers and, with prior arrangement for long interviews, can be conducted inside the home. Business customers can be interviewed at work at a time convenient to them. Street interviews are efficient if a large part of the population falls within the target group. Whilst the most common method of personal interviewing involves writing the respondents' answers onto paper questionnaires, there are alternatives. If speed is vital, computer assisted personal interviews (CAPI) can be used. Interviewers are
provided with palmtop computers from which responses can be downloaded daily. With enough interviewers, a large survey can be conducted within days. Long interviews (typically in the home or the office) can be recorded so that the detail of customer comments can be captured more efficiently. However, although appropriate for depth interviews at the exploratory stage, this is not common at the quantitative stage of customer satisfaction surveys. For CSM main surveys interviews will usually be short (typically 10 minutes), and most questions will be closed. As with all methods of data collection, face-to-face interviews have advantages and disadvantages.

(a) Advantages of face-to-face interviews
Personal interviews have a number of important advantages:
1. It is easier to build rapport with the respondent in the face-to-face situation.
2. It is much easier to achieve total respondent understanding. Not only can complex questions be explained, but with face-to-face interviews it is usually possible to see if the respondent is having a problem with the question.
3. Visual prompts such as cards and diagrams can be used, to visibly demonstrate the range of responses on a rating scale for example.
4. Personal interviews can be very cost-effective with a captive audience, such as passengers on a train, spectators at a sporting event or shoppers in a busy store, because where there are plenty of people in one place it is often possible to conduct large numbers of interviews in a short space of time.
5. In some situations, such as visiting people at home or at their place of work, it is feasible to conduct quite long interviews, up to half an hour, allowing plenty of time to explore issues in some depth and gather a considerable amount of qualitative information.

(b) Disadvantages of face-to-face interviews
There are disadvantages to personal interviews, mainly relating to cost:
1. Personal interviews will almost always be the most costly data collection option.
2. Customers are often scattered over a wide geographical area, so more time can be spent travelling than interviewing. It is not unusual in business-to-business markets to average fewer than two personal interviews per day, and often below one per day if the survey is international. As well as the time involved, the travel itself is likely to be costly.
3. Since many people do not like to give offence (by giving low satisfaction scores, for example), there may be a tendency to be less frank in the face-to-face situation. This unintended interviewer bias19 will be exacerbated if the interviewer is employed by the organisation conducting the survey. Since the interviewer makes much more impact on the respondent in the face-to-face situation than in telephone interviews, even genuinely unintended behaviour such as body language and tone of voice may influence respondents20,21. A typical example in customer satisfaction interviews is showing empathy with customers detailing a very distressing experience with a supplier. Whilst it is a natural human instinct to be sympathetic, this may encourage the respondent to dwell on their dissatisfaction and may negatively bias subsequent responses.
4. Because interviews take place remotely, and due to the problems outlined above, face-to-face interviewers have to be particularly well trained, and quality control procedures extensive and consistently followed22. This adds significantly to the cost of face-to-face interviewing.
5. The challenges outlined in points 3 and 4 above can be further magnified in business markets, where the respondents will usually be senior people. They will soon become irritated, and often alienated from the process, if they feel that the interviewer does not fully understand the topics under discussion, and in a face-to-face situation this soon becomes obvious. Therefore, to achieve the advantages of longer interviews with in-depth comments, it is essential to use high calibre 'executive' interviewers who can hold a conversation at the same level as the people they are interviewing, and this is very costly.
6. Obtaining a representative sample can also be very difficult1, as some types of customer, e.g. old and wealthy people, tend to be reluctant to welcome an interviewer into their home. There are also other types of customer, e.g. those living in deprived, potentially dangerous neighbourhoods, where interviewers are increasingly reluctant to go.

KEY POINT
Telephone interviews are much more common than face-to-face for customer satisfaction surveys.

7.2.2 Telephone interviews
A second interview option involves contacting customers by telephone, typically at work in business markets and at home in consumer markets.
Responses can be recorded on paper questionnaires or, using computer assisted telephone interviews (CATI), data can be captured straight onto the computer. CATI systems have significant capital cost implications and also higher set-up costs for each survey compared with paper-based telephone interviewing. Consequently, whilst CATI will not be cost-effective for small scale surveys, it will be significantly less costly for large samples and frequent surveys23. For CSM main surveys, telephone interviews are much more common than face-to-face since they have a number of advantages.

(a) Advantages of telephone interviews
1. Telephone interviews are almost always the quickest controllable way of gathering main survey data.
2. They are relatively low cost and normally much less costly than face-to-face interviews.
3. The two-way communication means that the interviewer can still explain things and minimise the risk of misunderstanding.
4. It is possible to gather reasonable amounts of qualitative information in order to understand the reasons underlying the scores. For example, interviewers can be given an instruction to probe any satisfaction scores below a certain level to ensure that the survey identifies the reasons behind any areas of customer dissatisfaction.
5. Distance is not a problem, even in worldwide markets.
6. It is by far the best data collection method for achieving a random and representative sample.
7. Telephone interviews reduce interviewer bias, as perceived anonymity is greater1.
8. The ability to monitor interviewers also makes it the method of data collection with the tightest quality control. CATI further improves quality control by eliminating the possibility of many errors such as incorrect routing or the recording of inadmissible answers2. Tests show that the impact of interviewer bias is neither increased nor reduced by CATI compared with paper-based telephone interviews24.
9. Call-backs can be managed to maximise response rates.
10. Provided CATI is used, headline results can be provided continuously as the survey progresses.

(b) Disadvantages of telephone interviews
1. Interviews cannot be as long as those achievable face-to-face. Ten minutes is enough for a telephone interview, especially when interviewing consumers at home in the evenings; up to 15 minutes can be acceptable for business interviews during the day. For the vast majority of CSM main surveys this is an adequate length of time and is comparable with the time one can expect customers to devote to filling in a self-completion questionnaire.
2. Questions have to be straightforward. As we will see when we look at rating scales in Chapter 8, there are certain types of question that cannot be used on the telephone.
3. One of the biggest frustrations with telephone surveys is that people are not sitting on the other end of the telephone waiting to be interviewed! It is usually necessary to make multiple call-backs to get a reliable sample25, as shown by the statistics in Figure 7.2. However, although multiple call-backs add to the cost of a telephone survey, they are very feasible and form a significant reason why telephone interviews are more likely to provide a random and representative sample (and hence a reliable result) than any other method of data collection.
4. Although not as acute as in face-to-face interviews, there is still potential for the interviewers to bias the responses20,21. Telephone surveys require highly
trained interviewers22. For all interviews they need to be sufficiently authoritative to persuade respondents to participate and sufficiently relaxed and friendly to build rapport without deviating from the question wording. As with personal interviews, telephone interviews in business markets need 'executive' interviewers of high calibre who can communicate at the same level as the respondent.
5. Whilst less costly than face-to-face interviews, telephone interviews are more costly than self-completion methods.

(c) Call-backs
In household markets the hit rate tends to be better than the figures shown in Figure 7.2, but in business markets it can easily be worse. For that reason it would be good practice to make at least three call-backs for domestic interviews and at least five in business markets to ensure good sampling reliability2.

FIGURE 7.2 Call-backs in telephone surveys
Average number of attempts required to make contact in telephone surveys:
1 attempt reaches 25% of the sample
5 attempts reach 85% of the sample
8 attempts reach 95% of the sample
12 attempts reach 100% of the sample
7.3 Choosing the most appropriate type of survey

7.3.1 Interview or self-completion
Most organisations will reject the personal interview option because of cost and practicality, and web surveys due to the impossibility of achieving a reliable sample. Therefore, the choice for most organisations will be between a telephone and a postal survey. The postal option will almost certainly be cheaper, so the first question is to consider the viability of achieving a reliable sample using a self-completion questionnaire. If the questionnaire can be personally handed to customers and collected back from them, a good response rate will be easily achievable. Passengers on an aeroplane or customers in a restaurant would be good examples. Sometimes, as with shoppers in a store, personal collection and distribution is feasible but a suitable location for filling in the questionnaire would have to be provided. However, although this method is good for response rates, it will often not provide an accurate or useful measure of customer satisfaction because it cannot cover the full extent of the customer experience. What if customers have a problem with subsequent delivery reliability of the goods purchased in store, or aeroplane passengers experience a
problem on landing, retrieving their baggage or leaving the airport? For most organisations customers are more remote, so mail remains the only practical distribution option for a self-completion questionnaire. The probability of achieving an acceptable response rate will therefore have to be estimated. The key factor here is whether customers will perceive the organisation as an important supplier or the product as one that interests them. If they do, it will be feasible to achieve a good response rate. If they do not (and most suppliers over-estimate their importance to the customer), even following all the advice in this chapter will probably not be sufficient to lift the response rate to a reliable level. Examples of organisations in this position include utilities and many financial and public services. In these cases, a telephone survey is the only sensible option.

KEY POINT
Unless very large samples are necessary, telephone interviews are usually the best data collection option for a reliable customer satisfaction measurement process.

Even when an adequate response rate can be achieved by mail, the telephone response rate will be higher and much more detailed customer comments will be gathered. In particular, reasons for low satisfaction can be probed, and this depth of understanding will be very helpful when determining action plans after the survey. Telephone surveys are therefore often the preferred option of many organisations, in both business and consumer markets. A very large sample size is typically the main reason for selecting the postal option, since large samples will significantly increase the cost differential between postal and telephone surveys. A supermarket, for example, may want a reliable sample for each of several hundred stores, and this would be extremely costly if customers were interviewed by telephone.
7.3.2 Mixed methods
If feasible, it is strongly recommended to use one method of data collection for CSM, since this avoids the unnecessary introduction of variables that may skew the results. It is very unusual for a survey method that is suitable for most customers to be impractical for some important customer groups. Usually, when mixed methods are considered it is for the wrong reasons, e.g. cutting costs. For example, it may be feasible to conduct a low cost web survey for some customers (e.g. those for whom the organisation has email addresses), with a different method of data collection used for the rest. However, this will make it very difficult to draw reliable and actionable conclusions about how to improve customer satisfaction, and impossible to unequivocally monitor the organisation's success in improving customer satisfaction over time as the mix of customers for whom the organisation has email addresses changes. A more valid reason for adopting mixed survey methods may occur in business
markets due to the very large differences in the value of large and small accounts. Just as the organisation invests more in servicing its key accounts, it might also choose to invest more in surveying them. Personal interviews might therefore be used for key accounts, since a longer interview will be possible and this will enable the company to gain a greater depth of understanding of key accounts' perceptions. It will also have relationship benefits, since it will demonstrate to the key accounts that they are considered to be very important, an impression that may not be conveyed by a telephone interview or a postal survey. The data collection methods could be mixed even further, with telephone interviews used for medium value accounts and a postal survey for small ones. Provided the same questions are asked in the same way, the responses will have some limited comparability, but it is important to ensure that this happens. Any additional discussion or additional questions used in the personal interviews with key accounts must come after the core questions from the telephone and postal surveys, to ensure that additional discussions (which are absent from the telephone and postal surveys) cannot influence the respondents' answers to the core questions. Although the responses to the questions will be somewhat comparable, the reliability of the results across the three methods may not be. It is likely that lower response rates will reduce the reliability of the results from the low value customers compared with the other two segments, but this may be considered by many companies to be a price worth paying.
Assuming that the alternative would be a telephone survey across the board, the net effect of this three-tiered approach is to shift investment in the survey from the low value accounts to the key accounts, a decision that is likely to be compatible with the company's normal approach to resource allocation across the customer value segments. However, even if the same method of data collection (typically telephone interviews) were applied to all customers, the stratified random sampling approach described in the previous chapter would ensure that the survey focused far more on large accounts than small ones. We would therefore not normally recommend a mixed approach even in business markets, particularly since, if it were adopted, it would be vital for future tracking that exactly the same mixed methods of data collection are used across customer groups, in the same proportions, for future updates, and this would add a great burden of time, resources and costs to the CSM process in the long run.

KEY POINT
It is rarely beneficial to use mixed methods of data collection for customer satisfaction surveys because comparability will be compromised.

7.3.3 Consulting customers about survey decisions
Sometimes organisations believe that customers would prefer to be consulted about whether or how they want to be surveyed, and that following this approach will increase response rates. In its simplest form this could involve writing to customers
before the survey with a freefone number for them to call if they would prefer not to take part. A more costly approach would be to also ask those happy to participate how they would prefer the survey to be administered. To do this, a response mechanism, such as a reply-paid card, would have to be included, or customers could be telephoned. A major UK consumer finance company recently tried this approach through a pre-survey telephone call. Most of the customers sampled agreed to take part, with approximately 80% selecting email as their preferred method. This resulted in a mainly email survey, with only 20% receiving a postal questionnaire. Despite the use of the costly pre-notification by telephone, only 20% of the customers emailed responded. By contrast, the postal survey achieved an exceptionally high 70% response, buoyed by the very high effectiveness of the telephone pre-notification. Had all customers been surveyed by post, the overall response rate would have been much higher and the company would have avoided the problem of data comparability.

KEY POINT
The views customers express about how they would like to be surveyed do not relate to subsequent response rates.

This mistaken approach originates in some organisations' belief that their customers are somehow different to other human beings. Whilst they may have specific requirements of the organisation's product or service (hence the need for exploratory research), they are not different from other people in most aspects of their daily lives. People are customers of many organisations, so conclusions about how customers generally respond to surveys should guide data collection decisions. The key conclusions are that customers appreciate being consulted in a professional manner about their satisfaction, so very few take advantage of an opt-out option.
However, the views they express about how they would like to be surveyed do not provide a reliable guide to subsequent response rates, so organisations should use the single most suitable method: telephone for the best response, postal where good response rates are achievable or electronic for a customer base of heavy internet users.
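The consumer finance example above can be checked with a short calculation. The function and the weighting are our illustration of the figures quoted in the example, not the authors' own workings.

```python
def blended_response(segments):
    """Overall response rate across survey-method segments, each
    given as (share_of_sample, response_rate)."""
    return sum(share * rate for share, rate in segments)

# 80% of customers surveyed by email (20% response),
# 20% surveyed by post (70% response):
print(round(blended_response([(0.8, 0.2), (0.2, 0.7)]), 2))  # 0.3

# Had everyone been surveyed by post at the 70% response rate,
# the overall rate would have been much higher:
print(round(blended_response([(1.0, 0.7)]), 2))  # 0.7
```

The blended 30% response of the mixed approach against 70% for an all-postal survey is what makes consulting customers on survey method a poor trade, quite apart from the comparability problem.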
7.4 Introducing the survey
As suggested earlier, the way the survey is introduced to customers will make the biggest single difference to the way they perceive the exercise, improving both the response rate and the quality of response. It is crucial therefore that all customers in the sample receive prior notification of the survey in the form of an introductory letter, whatever the method of data collection26. As we saw earlier in the chapter, the notification should ideally be prior to, rather than simultaneous with, the survey. If the introductory letter is included with a postal questionnaire or read out when customers are telephoned for an interview, it will be much less costly, but also less effective. To fully appreciate this, think about people's typical decision making
process when invited to take part in a survey. It is usually instantaneous and based on whether the individual is busy and whether it is a convenient time. Often people will decline to take part purely on grounds of inconvenience, with very little thought about the nature of the survey. People are far more likely to respond if they are interested in the aims and outcomes of the research and see it as useful. They are far less likely to take part if they perceive the survey as a general information gathering exercise of no benefit to themselves, and especially if they associate it with selling. This is why an introductory letter is more effective as a stand-alone mailing before the questionnaire is sent or the customer is telephoned. In business or consumer markets people will open a personalised letter and read it, especially if it is from an organisation they deal with. This enables the supplier to make customers think about the purpose and benefits of the survey at a time when they are not being asked to take part. As we pointed out in Chapter 1, most people think it is very positive when an organisation asks for feedback on its performance. Consequently, if they receive the introductory letter beforehand, with more time to think about the purpose of the survey, they are much more likely to take part when the questionnaire arrives or they are contacted by telephone, and the survey's PR value will be maximised.

KEY POINT
For maximum effectiveness the introductory letter should be sent on its own, a few days before mailing the questionnaire or starting interviews.

As we will explain in more detail in Chapter 17, carrying out a customer survey also provides an opportunity to enhance the organisation's image by demonstrating its customer focus, and the introductory letter will play an important role here. Conversely, carrying out a customer survey in an amateurish or thoughtless way could damage its reputation.
There are three main aspects of introducing the survey: whom to tell, how to tell them and what to tell them.

7.4.1 Who?
As a minimum, everyone sampled to take part in the survey must be contacted, but some organisations inform all customers, since it demonstrates to the entire customer base that the business is committed to customer satisfaction and is prepared to invest to achieve it. This can be a significant factor in making customers see the organisation as customer-focused. Where companies have a very large customer base, this communication could become costly, depending on the media used. If budgets are not sufficient, the survey can be introduced to those sampled to participate, with communication to the entire customer base provided in the form of feedback after the survey.

7.4.2 How?
This will clearly depend on the size of the customer base. For companies in business markets with few customers it may be productive to explain the process personally to
each one through well briefed customer contact staff. For most organisations a personalised introductory letter is the most cost-effective option. With a very large customer base a special mailing would be costly, although it is worth considering its long-term effectiveness in building customer loyalty compared with a similar spend on advertising. If cost does rule out a special mailing, it is often possible to use existing communication channels to inform customers of the CSM programme. This may include printing a special leaflet to be enclosed with an existing mailing or creating space in an existing customer publication such as a newsletter. If feasible, communications at the point of sale or service can be very effective in preconditioning customers who may later be contacted to take part in the survey. These could include posters in hospitals, on stations or in retail stores outlining the benefits to customers that have resulted from the organisation's CSM process. They could include informative leaflets in hotels, restaurants or any other premises visited by customers. They could also include a letter handed to customers when a service has been completed or a product purchased, which also gives the member of staff the opportunity to encourage customers to take part in the survey.

7.4.3 What?
There are three things that customers should be told when the survey is introduced:
(a) Why it is being done.
(b) How it will be done.
(c) The feedback that will be provided afterwards.

(a) The purpose of the survey
An example of an introductory letter is shown in Figure 7.3. The starting point is to explain that the purpose of the survey is to identify whether customers' requirements are being fully met, so that action can be taken to improve customer satisfaction where necessary.
It is worth emphasising the high priority of customer satisfaction for the organisation, its commitment to addressing any problems perceived by customers and the importance of feedback from customers to highlight the areas concerned.

(b) The survey details
Customers clearly need to know what form the survey will take, so tell them whether it will be a telephone interview, a postal questionnaire or any other type of survey. If the introductory letter accompanies a postal questionnaire the method of survey will be obvious, but it should still explain the instructions for completing and returning the questionnaire. For all methods of data collection, the introductory letter should emphasise that the time commitment will not be burdensome: ten minutes to undertake a telephone interview or complete a questionnaire. Assuming the organisation is adhering to the good practice of not asking any individual to take part in a survey more than once a year, the letter can emphasise that customers are only being asked for a maximum of ten minutes per annum to provide feedback on their satisfaction. Last but not least, for interviews, the second paragraph should stress that
Chapter seven
Collecting the data
an appointment will be made to interview customers at a time convenient to them.

(c) Feedback
Research evidence suggests that promising feedback is the single most effective element of the introductory letter for increasing response rates6. The introductory letter must therefore inform customers that they will receive feedback on the results and on the key issues that have been identified by the survey. It should also promise that the organisation will share with customers the actions that it plans to take to address any issues. This helps enormously in convincing customers that taking part in the survey will be a worthwhile exercise.

FIGURE 7.3 Introductory letter

Dear...

As part of our ongoing commitment to customer service at XYZ, we are about to conduct our annual survey to measure customer satisfaction. I would therefore like to enlist your help in identifying those areas where we fully meet your needs and those where you would like to see improvements. We attach the utmost importance to this exercise since it is your feedback that will enable us to continually improve our service in order to make all our customers as satisfied as possible.

I believe that this process needs to be carried out independently and have therefore appointed ABC Ltd, an agency that specialises in this work, to carry out the exercise on our behalf. They will contact you in the near future to arrange a convenient time for a telephone interview lasting approximately 10 minutes. Since we undertake not to ask customers to participate in a survey more than once a year, we are asking you for no more than 10 minutes per annum to provide this feedback.

ABC will treat your responses in total confidence and we will receive only an overall summary of the results of the interviews. Of course, if there are any particular points that you would like to draw to our attention you can ask for them to be recorded and attributed to you personally if you wish.
After the survey we will provide you with a summary of the results and let you know what action we plan to take as a result of the findings. I regard this as a very important step in our aim of continually improving the level of service we provide to our customers and I would like to thank you in advance for helping us with your feedback.

Yours sincerely
XXXXXX
Chief Executive Officer
XYZ Ltd.
7.5 Confidentiality
There has been much debate about whether respondents should be anonymous or named in customer satisfaction surveys. Before suggesting an approach, we will review both sides of the confidentiality debate.
7.5.1 Confidentiality – the advantages
1. In the market research industry, respondent anonymity has traditionally been the norm on the assumption that confidentiality is more likely to elicit an impartial response from respondents1,4,27. This is based on evidence showing that respondents' answers can differ significantly when they are interviewed by people whom they know. If customers are likely to have a continuing relationship with an employee, such as an account manager, they may not want to offend the employee or may want to protect a future negotiating stance. For example, if a salesperson personally conducts a customer satisfaction interview with his or her customers, are they likely to give honest answers to questions about the performance of the salesperson? Even if the salesperson does not personally conduct the interview (or if a self-completion questionnaire is used), respondents' answers may still be influenced if they know that their responses will be attributed and the salesperson will see them.
2. Problems caused by lack of confidentiality are often exacerbated in post-event surveys that are completed on the spot by the customer and collected by the supplier. A typical example would be a service provided in the home, such as an electrical or plumbing installation. Although good for response rates, customers will often be deterred from honesty if the employee who provided the service watches them complete the questionnaire, especially since many of the questions will refer directly to the individual concerned. The system is also open to abuse by unscrupulous employees who may try to influence the respondent or may contrive to 'lose' questionnaires recording low scores. As with non-response bias, a survey suffering from employee-induced bias could be very misleading and a very dangerous basis for decision making.
3. Confidentiality is also supported by considering the distinctive role of research compared with other customer service initiatives.
Research should focus on the 'big picture' rather than on individual customers. It is normally undertaken with a sample of customers rather than a census but is intended to accurately represent the entire customer base. Its value for management decision making is in highlighting trends, problems with processes and widely held customer irritations, which can be addressed by introducing improvements that benefit all customers and improve the company's position in the marketplace. The purpose of research is not to highlight specific problems with individual customers, for two reasons. Firstly, even a very serious problem (between one customer and an individual salesperson, for example) may not be representative of a wider problem that needs corrective action. Secondly, research is not the best way to identify specific problems with individual customers. Organisations should have effective customer service functions and complaints systems for this purpose. Relying on a periodic survey of a sample of customers to identify problems with individual customers suggests very poor management of customer service.
4. Whatever the frequency of customer satisfaction surveys, when they take place repeatedly customers learn about the process and draw conclusions that affect their future behaviour. If they can be open and honest without any personal repercussions, they are more likely to be honest in future surveys. On the other hand, if customers learn that the organisation, their sales person or any other employee is using survey responses to manage specific relationships with customers, they may gradually learn to 'use' the survey process. At first they may simply avoid making negative comments that may harm personal relationships with one or more employees of the supplier, but over time they may learn to become more manipulative, seeking to use the survey process to influence pricing, service levels or other aspects of the cost-benefit equation.

7.5.2 Confidentiality – the disadvantages
1. The obvious disadvantage of respondent confidentiality is that it gives the organisation no opportunity to respond to and fix any serious problems causing dissatisfaction and perhaps imminent defection of individual customers. Some organisations see the customer satisfaction survey as an additional service recovery opportunity, using a 'hot alert system' to immediately highlight any serious problems with individual customers so that they can be resolved. Companies using this approach maintain that the opportunity to prevent possible customer defections outweighs the advantages of respondent confidentiality. Even one potential defection is worth averting.
2. In business markets, particularly those with very close customer-supplier relationships, the case against confidentiality may be even stronger. In these circumstances, suppliers may feel that customers would expect them to know what they had said in the survey and to respond with proposals to address their specific problems28.
3.
A further disadvantage of anonymity is that it makes it impossible to add responses to the customer database and use modelling techniques to project responses onto other, similar customer types. For organisations with a very large customer base this can be very helpful in classifying customers and identifying segments of customers for tailored CRM initiatives.

7.5.3 The right approach to respondent confidentiality
Our view is that the most important principle underpinning data collection is that the views gathered should accurately represent the views held by respondents. If there is a chance that lack of respondent confidentiality could compromise the reliability of the data collected, the price paid for knowing 'who said what' is far too high. However, a compromise approach is possible. Respondents can be promised confidentiality but, at the end of the interview, with full knowledge of what they were asked and how they replied, can be asked whether they wish to remain anonymous or whether they would be happy to have their views linked with their name. A space can
be included on a self-completion questionnaire for any respondents who wish to provide their name and personal details. Some customers, especially those in business markets who have developed a partnership approach with suppliers, will prefer their comments to be attributed to them. Equally, for those who prefer anonymity, the confidentiality of the interview and the survey process will be protected. In consumer markets customers will often provide their details if they require a response from the supplier, e.g. to resolve a problem. With this approach, a hot alert system can still be used, but only with respondents who have consented to being named. Of course, if an organisation decides that its survey will not be anonymous, the Data Protection Act, as well as ethics, dictates that this must be made totally clear to respondents before they take part – a disclosure that will typically have an adverse effect on response rates as well as the accuracy of the data.

KEY POINT
Customers should be promised confidentiality at the start of the interview or questionnaire, but at the end can be asked if they are happy for their views to be attributed.

7.5.4 Legal issues
The main legislation affecting surveys in the UK is the 1998 Data Protection Act. Its main purpose as far as surveys are concerned is to ensure confidentiality in the collection and use of personal data. For anonymous research surveys this is not an issue, but where respondents' responses are to be linked with their name, or other identifiable personal data, they must be told beforehand or asked to give their permission at the end of the interview or questionnaire. If attributable, data should be stored for no longer than necessary. The Act does not specify the length of time, but one year would be reasonable since it is not unusual for data to be re-analysed to produce additional information at a later date. A further relevant distinction is the purpose of the research.
Surveys conducted solely for research purposes should conform to the requirements outlined above. Surveys collecting data for other purposes, such as sales and marketing activities, must specify the exact use(s) to which that data will be put before asking the customer to agree to their responses being recorded, stored on a database and used for those purposes. If the respondent agrees, the data can subsequently be used only for the specific purposes that the customer has approved. So if the customer had agreed to the data being used for targeting mailshots, for example, it would not be permissible to use the information for tele-sales. The Market Research Society Code of Conduct (which is not legally binding but guides good practice) gives detailed guidance about the implications of the Data Protection Act for researchers. For full details of the Data Protection Act or the Market Research Society Code of
Conduct, see the web addresses in Appendix 2.
7.6 When to survey
The remaining decisions to be taken about the survey concern timing and frequency. Of these, frequency should be considered first.

7.6.1 Continuous tracking or periodic surveys
Customer satisfaction surveys can be periodic or continuous. As well as surveys where the data are collected continuously and reported monthly or quarterly, continuous tracking also refers to frequent surveys, e.g. every month or every quarter, even if the data are not collected continuously over that time. For periodic surveys the data collection happens at a point in time, providing a snapshot of customers' level of satisfaction. Improvement initiatives can then be undertaken, with the next periodic survey providing evidence about the organisation's success in improving customer satisfaction. Surveys are categorised as periodic if the survey intervals are at least six months apart.

(a) Continuous tracking
Surveys are more likely to be continuous when the customer-supplier relationship revolves around a specific event or transaction, such as taking out a mortgage, buying a computer, calling a helpline or going on vacation. In these circumstances it is very important that customers are surveyed very soon after the event, before their memory fades. Surveying customers weeks after an uneventful stay in a hotel or a brief conversation with a telephone help desk is completely pointless. Unless the event was very important, e.g. the purchase of a new car or a piece of capital equipment, customers should be surveyed no more than four weeks after the event. Results from continuous tracking surveys are usually rolled up and reported monthly or quarterly. The big advantage of frequent reporting is that managers do not have to wait long for evidence that their customer service initiatives are working, and it helps to keep the spotlight on customer satisfaction within the organisation. Event-driven surveys tend to be tactical in nature and operational in focus.
They often feed quickly into action but are more likely to be volatile. The closer to the event, the more volatile they will be. If customers are surveyed at the point of service delivery, as they check out of a hotel for example, their responses will be very heavily influenced by their recent experience27. A disappointing breakfast a few minutes earlier may result in poor satisfaction ratings across the board. Conversely, a very pleasant experience could have a positive ‘halo effect’. Consequently, this type of ‘post transaction’ survey may not give an accurate measure of customers’ underlying satisfaction nor be a reliable guide to future purchasing behaviour. The irate customer who was made to wait too long to check out of the hotel may vow, in the
heat of the moment, not to return, but some weeks later, when next booking a hotel, the incident will have waned in importance and a more measured purchase decision will be made. Therefore, whether data collection is continuous or periodic, surveying customers a week or two later, away from the point of sale or service delivery, will provide a much more accurate measure of underlying customer satisfaction and a better prediction of future loyalty behaviour29.

KEY POINT
Surveying customers away from the point of sale provides a more reliable measure of underlying customer satisfaction and loyalty.

(b) Periodic surveys
Periodic surveys are more suited to ongoing customer-supplier relationships and are often more strategic in focus. Questions will cover the total product and will focus on customers' most important requirements. Periodic surveys are normally conducted annually or bi-annually. Before making a decision on frequency it is useful to consider the issues highlighted by the Satisfaction Improvement Loop. As we all know, the purpose of measuring customer satisfaction is not to conduct a survey but to continually improve the company's ability to satisfy and retain its customers. Figure 7.4 illustrates the sequence of events that must occur before the next customer survey can be expected to show any improvement in customer satisfaction. It is therefore necessary to consider how long the Satisfaction Improvement Loop will take to unfold in the organisation. Unless the organisation is very slow at making decisions or taking action it should not take as long as one year, and monthly or quarterly surveys will be more appropriate for organisations that are capable of swift decision making and implementation of actions to improve customer satisfaction.

FIGURE 7.4 The Satisfaction Improvement Loop
(A cycle: Survey → Survey results → Internal feedback → Decisions on actions → Implementation of actions → Service improvement → Customers notice improvements → Customer attitude change → Customer satisfaction → next Survey)
Of course, the big disadvantage of less frequent reporting of customer satisfaction is the longer delay before the success of satisfaction improvement initiatives can be evaluated. It also makes it much more difficult for organisations to keep employees focused on the importance of customer satisfaction. Periodic surveys are therefore most suitable for business-to-business companies, which often have a small customer base, and for organisations where the customer satisfaction improvement loop will be lengthy.

KEY POINT
Frequent reporting of customer satisfaction minimises the delay between taking action and seeing improvement, but periodic surveys may be more appropriate for B2B companies with a small customer base, or for organisations that are slow to implement change.

7.6.2 Timing
The main point about timing, especially for annual surveys, is that it should be consistent. Don't survey in the summer one year and in the winter the following year. Any number of factors affecting the customer-supplier relationship could change across the seasons. Companies will be aware of significant seasonal events in their industry, e.g. annual price rises, and these potentially distorting factors should be avoided.
Conclusions
1. To conduct a customer satisfaction survey, most organisations will choose between a telephone survey and self-completion questionnaires, typically in the form of a postal survey or possibly a web survey. Of the two methods, self-completion surveys are cheaper but telephone surveys will usually provide more detail and greater reliability due to higher response rates.
2. Electronic surveys will often be inappropriate for customer satisfaction measurement in consumer markets due to unrepresentative samples.
3. In theory, methods of data collection can be mixed provided core questions occur at the beginning of the questionnaire and are asked consistently across all methods used. In practice, however, one data collection method for the whole survey is preferable since it eliminates unnecessary variables.
4. Low response rates will render the results of a customer satisfaction survey meaningless, so telephone interviews are normally undertaken unless a good response rate (at least 30%) can be achieved.
5. In consumer markets, huge samples sometimes dictate the use of self-completion questionnaires, but in business markets, where sample sizes are usually relatively small, telephone interviews are typical.
6. To improve response rates, a good introductory letter is crucial and reminders very effective. Affordable incentives typically don't work.
7. To guarantee honest and objective answers, respondent confidentiality must be
offered. If respondents are happy to be named, a hot alert system will enable the organisation to address any specific instances of high dissatisfaction.
8. Continuous tracking with monthly or quarterly reporting provides quick feedback on service improvement initiatives and helps to keep the spotlight on customer satisfaction. Periodic surveys tend to be more appropriate for companies in B2B markets.
References
1. McGivern, Yvonne (2003) "The Practice of Market and Social Research", Prentice Hall / Financial Times, London
2. Dillon, Madden and Firtle (1994) "Marketing Research in a Marketing Environment", Richard D Irwin Inc, Burr Ridge, Illinois
3. Ofcom (2006) "The Consumer Experience: Telecoms, Internet and Digital Broadcasting", HMSO, London
4. Crimp, Margaret (1985) "The Marketing Research Process", Prentice-Hall, London
5. Yu and Cooper (1983) "Quantitative Review of Research Design Effects on Response Rates to Questionnaires", Journal of Marketing Research, (February)
6. Powers and Alderman (1982) "Feedback as an incentive for responding to a mail questionnaire", Research in Higher Education 17
7. Schlegelmilch and Diamantopoulos (1991) "Prenotification and mail survey response rates: a quantitative integration of the literature", Journal of the Market Research Society 33 (3)
8. Dillman, D A (1978) "Mail and telephone surveys: the Total Design Method", John Wiley and Sons, New York
9. Peterson, Albaum and Kerin (1989) "A note on alternative contact strategies in mail surveys", Journal of the Market Research Society 31 (3)
10. Sutton and Zeits (1992) "Multiple prior notification, personalisation and reminder surveys: do they have an effect on response rates?", Marketing Research: A Magazine of Management and Applications 4 (4)
11. Kanuk and Berenson (1975) "Mail Survey and Response Rates: a Literature Review", Journal of Marketing Research (November)
12. Yammarino, Skinner and Childers (1991) "Understanding Mail Survey Response Behavior: a Meta-Analysis", Public Opinion Quarterly
13. Brennan, M (1992) "The effect of a monetary incentive on mail survey response rates: new data", Journal of the Market Research Society 34 (2)
14. Dommeyer, C J (1988) "How form of the monetary incentive affects mail survey response", Journal of the Market Research Society 30 (3)
15. James and Bolstein (1992) "Large monetary incentives and their effects on mail survey response rates", Public Opinion Quarterly 56 (4)
16. Paolillo and Lorenzi (1984) "Monetary incentives and mail questionnaire response rates", Journal of Advertising 13
17. Furse and Stewart (1982) "Monetary incentives versus promised contribution to charity: new evidence on mail survey response", Journal of Marketing Research 19
18. Kalafatis and Madden (1995) "The effect of discount coupons and gifts on mail survey response rates among high involvement respondents", Journal of the Market Research Society 37 (2)
19. Kotler, Philip (1986) "Marketing Management: Analysis, Planning and Control", Prentice-Hall International, Englewood Cliffs, New Jersey
20. Freeman and Butler (1976) "Some Sources of Interviewer Variance in Surveys", Public Opinion Quarterly, (Spring)
21. Bailar, Bailey and Stevens (1977) "Measures of Interviewer Bias and Variance", Journal of Marketing Research, (August)
22. Tull and Richards (1980) "What Can Be Done About Interviewer Bias?", in "Research in Marketing", ed Sheth, J, JAI Press, Greenwich, Connecticut
23. Havice and Banks (1991) "Live and Automated Telephone Surveys: a Comparison of Human Interviewers and Automated Technique", Journal of Marketing Research, (February)
24. Groves and Mathiowetz (1984) "Computer Assisted Telephone Interviewing: Effects on Interviewers and Respondents", Public Opinion Quarterly
25. Kish, Leslie (1965) "Survey Sampling", John Wiley and Sons, New York
26. Walker and Burdick (1977) "Advance Correspondence and Error in Mail Surveys", Journal of Marketing Research, (August)
27. Szwarc, Paul (2005) "Researching Customer Satisfaction and Loyalty", Kogan Page, London
28. Vavra, Terry (1997) "Improving your Measurement of Customer Satisfaction", American Society for Quality, Milwaukee
29. Johnson and Gustafsson (2000) "Improving Customer Satisfaction, Loyalty and Profit: An Integrated Measurement and Management System", Jossey-Bass, San Francisco, California
CHAPTER EIGHT
Keeping the score
If you want to measure anything you have to have a measuring device. Unfortunately, no measuring device for customer satisfaction has ever achieved the universal adoption of the Celsius scale, the speedometer or the 12-inch ruler. That's partly because there isn't a definitive answer to the question "which is the most suitable rating scale for market research?" However, much of the reason for the proliferation of rating scales used for CSM is practitioners' ignorance of the characteristics of different scales and their advantages and disadvantages for customer satisfaction measurement. Since the choice of rating scale is one of the most contentious areas in customer satisfaction research, we will devote a complete chapter to the issue before we examine the final design of the main survey questionnaire.
At a glance
In this chapter we will:
a) Consider whether a precise measure or general customer feedback is more appropriate for monitoring customer satisfaction.
b) Explain the difference between parametric and non-parametric statistics and their implications for CSM.
c) Explore the relative merits of verbal and numerical scales.
d) Outline the arguments for and against a mid-point.
e) Discuss the suitability of expectation scales.
f) Consider how many points should be included on a scale for measuring satisfaction.
g) Recommend the most suitable scale for measuring and monitoring customer satisfaction and loyalty.
8.1 Why do you need a score?
Before we start it's worth considering why a customer satisfaction survey should result in a measure or score. Organisations could easily gather feedback from customers without worrying about a lot of time-consuming survey methodology and complicated statistical analysis. They could simply listen to customers, take note of
the things they don't like and fix the problems. Whilst the simplicity and relatively low cost of this approach is very appealing, it suffers from two fundamental problems:
1. Taking action. Since organisations can't address everything simultaneously they need to prioritise the allocation of resources. Without measures it would be impossible to make reliable decisions about the best areas to focus resources to improve customer satisfaction.
2. Judging success. If customer satisfaction is a key indicator of business performance, trying to improve it without a yardstick for judging success would be like trying to improve profits without producing financial accounts.
Therefore, the choice of rating scale for customer satisfaction measurement should be based on its suitability for achieving these two objectives rather than its validity for many other kinds of market research.
8.2 Parametric and non-parametric statistics
Statisticians refer to two types of data – parametric and non-parametric. In parametric statistics the data is a measured quantity, such as a volume of liquid, the speed of a vehicle, the temperature or, in research, numerical scores generated by interval or ratio scales1. With parametric data you can draw bell curves (see Chapter 6), and a normal distribution is defined by two parameters, the mean and the standard deviation. Parametric, normally distributed data permit researchers to draw inferences about the extent to which a result from a sample applies to the whole population and can be analysed using multivariate statistics such as analysis of variance and covariance, regression analysis and factor analysis.

In non-parametric statistics, the data is not a measurable quantity of something but a count or a ranking2, such as how many people have fair hair or black hair, how many times it was sunny or rainy or how many customers were satisfied or dissatisfied. Most types of non-parametric data are literally just counts of how many times something occurred, such as the number of days in the year that there was zero rainfall. Non-parametric statistics are analysed by counting up the number of incidences, such as how many customers ticked the 'satisfied' or 'very satisfied' boxes, and this is known as a frequency distribution. From non-parametric statistics you can draw conclusions about how many days it didn't rain, or how many customers are satisfied, but not about the average rainfall or the average level of satisfaction that your organisation is delivering. The statistical techniques used to analyse numbers can't be employed with non-parametric statistics, which have to be analysed using counts and frequency distributions and tests such as chi-square.
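The distinction can be illustrated with a short sketch. The responses and scores below are hypothetical, invented purely for illustration: a verbal scale yields categories that can only be counted, whereas a numerical scale yields scores that support a mean and standard deviation.

```python
from collections import Counter
from statistics import mean, stdev

# Hypothetical verbal-scale responses (ordinal, non-parametric):
# the legitimate summary is a frequency distribution - a count per category.
verbal = ["satisfied", "very satisfied", "satisfied", "dissatisfied", "satisfied"]
freq = Counter(verbal)
print(freq["satisfied"])  # how many respondents ticked 'satisfied'

# Hypothetical 5-point numerical-scale scores (treated as interval data):
# means and standard deviations are now meaningful.
scores = [4, 5, 4, 2, 4]
print(mean(scores))             # average satisfaction score
print(round(stdev(scores), 2))  # spread of the scores
```

Note that asking for `mean(verbal)` would simply fail: there is no arithmetic on category labels, which is exactly the point the chapter makes about verbal scales.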
KEY POINT
Data are either parametric or non-parametric and each must be analysed with appropriate statistical techniques. Data generated by verbal scales are non-parametric, so the analysis possibilities are limited to counts and frequency distributions and the data cannot be treated as though they were numbers. Numerical scales produce parametric data that can be analysed using the non-parametric techniques plus a wide range of statistical techniques suitable for numbers.
8.3 Interval versus ordinal scales
It is not unusual in satisfaction research to see simple verbal scales, where each point on the scale is given a verbal description (e.g. 'strongly agree', 'agree' or 'very satisfied', 'satisfied' etc). These are illustrated in Figures 8.1 and 8.2. The problem is that such scales have only ordinal properties. They give an order from good to bad or satisfied to dissatisfied without quantifying it. In other words, we know that 'strongly agree' is better than 'agree' but we don't know by how much. Nor do we know if the distance between 'strongly agree' and 'agree' is the same as the distance between 'agree' and 'neither agree nor disagree'.

FIGURE 8.1 Verbal scale
Below are some features of visiting _______ Dental Practice. Please place an 'X' in the box which most accurately reflects how satisfied or dissatisfied you are with each item, or put an 'X' in the N/A box if it is not relevant to you.
(Columns: N/A | Very dissatisfied | Quite dissatisfied | Neither satisfied nor dissatisfied | Quite satisfied | Very satisfied)
1. Helpfulness of reception staff
2. Location of the surgery
3. Cost of the dental treatment

FIGURE 8.2 Likert scale
Below are some features of eating out at _______. Please place an 'X' in the box which most accurately reflects how much you agree or disagree with the statement, or in the N/A box if it is not relevant to you.
(Columns: N/A | Disagree strongly | Disagree | Neither agree nor disagree | Agree | Agree strongly)
1. The restaurant was clean
2. The service was quick
3. The food was high quality

This is why verbal scales have to be analysed using a
frequency distribution, which simply involves counting how many respondents ticked each box. This means that data from verbal scales can be manipulated only with 'non-parametric statistics', based on the counts of responses in different categories. According to Allen and Rao, "The use of ordinal scales in customer satisfaction measurement should be discouraged. It is meaningless to calculate any of the fundamental distributional metrics so familiar to customer satisfaction researchers. The average and standard deviation, for example, are highly suspect. Similarly, most multivariate statistical methods make assumptions that preclude the use of data measured on an ordinal scale."3 Without mean scores for importance and satisfaction it is not possible to calculate a weighted customer satisfaction index (the most accurate type of headline measure for monitoring success) or 'satisfaction gaps' – a huge handicap for the actionability of customer satisfaction surveys. (For details see Chapters 11 and 12 respectively.)

The Likert scale poses some additional problems for satisfaction research. Developed by Rensis Likert2 in 1932, it has proved to be very useful for exploring people's social, political and psychological attitudes. Likert scales work best with bold statements, like those shown in Figure 8.2, rather than neutral ones4, but this introduces an element of bias. To minimise the so-called 'acquiescence bias' (people's tendency to agree with a series of statements), the list of statements should be divided equally between favourable and unfavourable statements5, so that respondents with a particular attitude would find themselves sometimes agreeing and sometimes disagreeing with the statements. In practice, this tends to be a problem for CSM because organisations are very reluctant to use strong negative statements (e.g. "the restaurant was filthy", "the service was very slow" ... agree / disagree).
Consequently, satisfaction surveys using Likert scales tend to suffer from a very high degree of acquiescence bias.

KEY POINT Likert scales tend to suffer from acquiescence bias when used for satisfaction surveys unless around half of the statements are negatively biased, which tends to be politically unacceptable.

Shown in Figures 8.3 and 8.4, interval scales use numbers to distinguish the points on the scale. They are suitable for most statistical techniques because they do permit valid inferences concerning the distance between the scale points. For example, we know that the distance between points 1 and 2 is the same as that between points 3 and 4, 4 and 5 etc. Consequently, data from interval scales are assumed to follow a normal distribution (see Chapter 6), so they can be analysed using ‘parametric statistics’. This permits the use of means and standard deviations, the calculation of indices and the application of advanced multivariate statistical techniques to
establish the relationships between variables in the data set – an essential prerequisite for understanding things like the drivers of satisfaction and loyalty. (See Chapters 10 and 14 for further explanation of these analytical points.) For a scale to have interval properties it is important that only the end points are labelled3; the labels (e.g. Very satisfied……Very dissatisfied) simply serve as anchors to denote which end of the scale is good / bad, agree / disagree etc.

FIGURE 8.3 5-point numerical scale

Below are some features of shopping at XYZ. Using the scale below where 5 means “completely satisfied” and 1 means “completely dissatisfied”, please circle the number that most accurately reflects how satisfied or dissatisfied you are with XYZ, or circle N/A if it is not relevant to you.

                           Completely                 Completely
                           dissatisfied               satisfied
1. Cleanliness of store    N/A   1   2   3   4   5
2. Layout of store         N/A   1   2   3   4   5
3. Helpfulness of staff    N/A   1   2   3   4   5
FIGURE 8.4 10-point numerical scale

Below are some features of shopping at XYZ. Using the scale below where 10 means “completely satisfied” and 1 means “completely dissatisfied”, please circle the number that most accurately reflects how satisfied or dissatisfied you are with XYZ, or circle N/A if it is not relevant to you.

                              Completely                                  Completely
                              dissatisfied                                satisfied
1. Ease of parking            N/A   1   2   3   4   5   6   7   8   9   10
2. Choice of merchandise      N/A   1   2   3   4   5   6   7   8   9   10
3. Queue times at checkout    N/A   1   2   3   4   5   6   7   8   9   10
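The analytical distinction between ordinal (verbal) and interval (numerical) scales described above can be sketched in a few lines of code. This is purely an illustration with made-up responses, not an analysis from the book: ordinal data get a frequency distribution, while interval data legitimately support a mean and standard deviation.

```python
from collections import Counter
from statistics import mean, stdev

# Hypothetical responses to one question on a 5-point verbal scale.
verbal = ["agree", "strongly agree", "agree", "neither", "agree",
          "disagree", "strongly agree", "agree"]

# Ordinal (verbal) data: only a frequency distribution is valid.
freq = Counter(verbal)
for category in ["strongly agree", "agree", "neither",
                 "disagree", "strongly disagree"]:
    pct = 100 * freq[category] / len(verbal)
    print(f"{category:>17}: {pct:.1f}%")

# Interval (numerical) data: means and standard deviations are permissible.
numerical = [9, 8, 8, 7, 9, 5, 10, 8]
print(f"mean = {mean(numerical):.2f}, sd = {stdev(numerical):.2f}")
```

The counts in the first half are all a verbal scale can legitimately yield; the summary statistics in the last line are what make numerical scales suitable for indices and multivariate analysis.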
8.4 The meaning of words

Some people subjectively prefer verbal scales because they feel that they understand the meaning of each itemised point on the scale, and at the individual level it is true that each person will assign a meaning they understand to each point. The same people often say that, by contrast, a numerical score doesn’t appear to have a specific meaning – does one person’s score of 7/10 refer to the same level of performance as another person’s score of 7? It is true that the word ‘satisfied’ has more verbal meaning than a score of 8/10. However, whilst individuals will ascribe a meaning to ‘satisfied’ that they are personally happy with, the problem is that it often doesn’t have exactly the same
meaning to everyone. It is undeniable that numbers, such as a score of 8/10, don’t have a meaning that people can readily put into words, but that lack of inherent meaning is precisely their strength for measurement. It’s why numbers were invented. Imagine if the early traders had to use a verbal scale such as ‘fair’, ‘very fair’ or ‘satisfied’, ‘very satisfied’ to judge the amount of wheat that should be traded for a horse. Numbers make it possible for people to understand measures because they know not only that 4 kilos is heavier than 3 kilos, but how much heavier. The same logic applies to satisfaction measurement. If a random sample of customers gives a set of numerical scores for satisfaction this year, and next year another random sample gives a slightly higher set of numbers, we know they are more satisfied, and by how much. Moreover, provided the sample is large enough, we can be sure within a very narrow margin of error that the higher level of satisfaction applies to the whole customer base. We may not be able to give a verbal meaning to a satisfaction index of 83.4% or a score of 7.65 for ‘ease of doing business’, but they will be truly accurate and comparable measures of the organisation’s success in delivering satisfaction from one year to the next. Furthermore, since people interpret words in a variety of ways, it would be pointless to attempt to apply a verbal description to the scores achieved. (In practice, benchmarking is the way to achieve this, as explained in Chapter 12.)

KEY POINT Numbers provide the most objective and unambiguous basis for monitoring changes in customer satisfaction.

There are two particular types of CSM survey where numerical scales are much better than verbal scales for ease of completion.
Firstly, numerical scales work much better for interviewing, since respondents simply have to focus on giving a score, out of 10 for example, rather than struggling to remember the different points on the verbal scale. Anybody who has ever tried to interview customers using a verbal scale will know exactly how difficult it is, especially on the telephone. Secondly, for international surveys the problem of consistent interpretation of verbal scales is hugely exacerbated by language and cultural differences. Any company conducting international research would be extremely unwise to consider anything other than a numerical rating scale.

KEY POINT Numerical scales are much more suitable than verbal scales for telephone interviews and for international surveys.
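Section 8.4 noted that, with a large enough random sample, the mean score can be generalised to the whole customer base within a narrow margin of error. A rough sketch of that calculation, using the standard normal approximation and purely illustrative figures (sample size, mean and standard deviation are invented for the example):

```python
from math import sqrt

# Illustrative survey results on a 10-point scale (assumes a simple random sample).
n = 400            # sample size
mean_score = 7.9   # mean satisfaction rating
sd = 1.6           # standard deviation of the ratings

# 95% confidence interval half-width, normal approximation (valid for large n).
margin = 1.96 * sd / sqrt(n)
print(f"{mean_score:.2f} +/- {margin:.2f}")   # 7.90 +/- 0.16
```

With 400 responses the mean is pinned down to within about 0.16 of a scale point, which is why small year-on-year movements can still be meaningful.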
8.5 The mid-point

Amateur researchers tend to worry about the mid-point on a rating scale, often believing that the mere existence of a mid-point encourages everyone to use it as an easy option.
Evidence from CSM research totally contradicts this popular myth. As suggested in Chapter 5, the main difficulty when measuring importance is people’s tendency to score the higher points on the scale. As far as satisfaction is concerned, it is well established that customer satisfaction data is almost always positively skewed (see Section 8.6). However, the evidence from thousands of customer satisfaction surveys conducted by The Leadership Factor is that respondents score it how they see it, with relatively few going for the middle of the scale unless the organisation is a very mediocre performer. In theory one should include a mid-point on the scale, since it is poor research to force respondents to express an opinion they don’t hold; they may genuinely be neither satisfied nor dissatisfied. However, we would have few concerns whether a scale had a mid-point or not. If you feel happier with 4 or 6 points rather than 5 or 7, we don’t believe it will make much difference; equally, since people don’t target the mid-point, there is no detriment to including one. Interestingly, a 10-point scale doesn’t have a mid-point, although this is academic since tests show that above 7 points, respondents typically don’t focus on where the mid-point is.
8.6 Aggregating data from verbal scales

Since it is not statistically acceptable to convert the points on a verbal scale into numbers and generate a mean score from those numbers, the normal method of analysing verbal scales is a frequency distribution (see Chapter 10). This leads organisations to report verbal scales on the basis of “percentage satisfied” (i.e. those ticking the boxes above the mid-point). As shown in Figure 8.5, this often masks changes in customer satisfaction caused by the mix of scores within the ‘satisfied’ and ‘dissatisfied’ categories. In fact, if results are reported in this way there is little point having more than 3 points on the scale – satisfied, dissatisfied and a mid-point. By contrast, the mean score from a numerical scale will use data from all points on the scale so will reflect changes from any part of the spectrum of customer opinion.

FIGURE 8.5 Aggregating data from verbal scales
[Stacked bar charts for ‘Cleanliness of the restaurant’ and ‘Quality of the food’. In each case the ‘percentage satisfied’ measure is 50%, masking quite different underlying mixes of ‘very dissatisfied’, ‘dissatisfied’, ‘neither satisfied nor dissatisfied’, ‘satisfied’ and ‘very satisfied’ responses.]
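The masking effect illustrated in Figure 8.5 can be sketched numerically. The percentages below are hypothetical, not the book’s figures, and the boxes are coded 1–5 purely for the comparison: a top-two-box ‘percentage satisfied’ stays flat while a mean of the kind a numerical scale would legitimately support moves.

```python
# Two hypothetical survey waves on a 5-point scale, as % of respondents per box
# (1 = 'very dissatisfied' ... 5 = 'very satisfied').
wave_1 = {1: 5, 2: 15, 3: 30, 4: 40, 5: 10}
wave_2 = {1: 5, 2: 15, 3: 30, 4: 10, 5: 40}

def pct_satisfied(wave):
    """Top-two-box measure: percentage ticking above the mid-point."""
    return wave[4] + wave[5]

def mean_score(wave):
    """Mean of the coded scale, weighting each point by its share of responses."""
    return sum(point * pct for point, pct in wave.items()) / 100

print(pct_satisfied(wave_1), pct_satisfied(wave_2))   # 50 50 - the change is masked
print(mean_score(wave_1), mean_score(wave_2))         # 3.35 3.65 - the change is visible
```

A large shift of customers from ‘satisfied’ to ‘very satisfied’ leaves the headline ‘percentage satisfied’ untouched, while the mean picks it up immediately.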
KEY POINT Satisfaction measures from verbal scales do not use all the data so provide a poor basis for monitoring changes in customer satisfaction.
8.7 Expectation scales

Some organisations use expectation scales in an attempt to measure the extent to which customers’ requirements have been met (see Figures 8.6 and 8.7). Whilst these scales have some intuitive appeal, they suffer from three serious drawbacks for CSM. The first is that, like any verbal-type scale, they have only ordinal properties, so suffer from all the analytical limitations outlined above. A much bigger drawback, however, is their unsuitability as a benchmark for judging the organisation’s success. As pointed out by Grapentine6, if the measure changes in future, is it because the company’s performance has improved or deteriorated, or is it down to changes in customers’ expectations? In the same article, Grapentine also highlights the third problem with expectation scales for measuring customer satisfaction. For many ‘givens’, such as ‘cleanliness of the restaurant’, ‘accuracy of billing’ or ‘safety of the aeroplane’, customers never score above the mid-point. Whatever the level of investment or effort required to achieve them, clean restaurants, bills without mistakes and aeroplanes that don’t crash will never do more than meet the customer’s expectations.

KEY POINT Expectation scales are not suitable for measuring and monitoring customer satisfaction.

FIGURE 8.6 A 5-point expectations scale

Please comment on how the service you received compared with your expectations by ticking one box on each line. Please tick the N/A box if it is not relevant to you.

                                 Much better   Better   As expected   Worse   Much worse   N/A
Helpfulness of staff
Friendliness of staff
Cleanliness of the restaurant

FIGURE 8.7 A 3-point expectations scale

Please comment on how the service you received compared with your expectations by ticking one box on each line. Please tick the N/A box if it is not relevant to you.

                              Exceeded my     Met my          Did not meet       N/A
                              expectations    expectations    my expectations
Cleanliness of the toilets
Waiting time for your table
Waiting time for your meal
8.8 Number of points

It is not practical to have many points on a verbal scale: 5-point verbal scales, like those shown in Figures 8.1 and 8.2, are the norm. This is a considerable disadvantage, since the differences between satisfaction survey results from one period to the next will often be very small. As we have already mentioned in Section 8.5, one of the characteristics of CSM data is that it tends to be skewed towards the high end of the scale. This merely reflects the fact that companies generally perform well enough to make most customers broadly satisfied rather than dissatisfied. (It is interesting to note that scores from situations where high levels of dissatisfaction do exist, typically when customers have no choice, exhibit a much more normal distribution.) What most companies are mainly measuring, therefore, is degrees of satisfaction, and since they are tracking small changes in that zone it becomes very important to have sufficient discrimination at the satisfied end of the scale and, for analytical purposes, a good distribution of scores – and this is the big problem with 5-point scales.

The problem is exacerbated by a tendency amongst some people to avoid the extremes of the scale. Even if we’re mainly measuring degrees of satisfaction, this isn’t a major problem on a 10-point scale because there are still 4 options (6, 7, 8 and 9) for the respondent who is reluctant to score the top box. With at least four choices it is quite feasible for customers to use the 10-point scale to acknowledge relatively small changes in a supplier’s performance. By contrast, it’s a big problem on a 5-point scale because for anyone reluctant to use the extremes there’s only one place for the satisfied customer to go – and because so many people go there it has become known as the ‘courtesy 4’! This often results in a narrow distribution of data with insufficient discrimination to monitor fine changes in a supplier’s performance, so the slow, small improvements in satisfaction that one normally sees in the real world will often go undetected by CSM surveys using 5-point scales. Consequently, whilst one can debate the rights and wrongs of different scales from a pure research point of view, the disadvantages from a practical business management perspective are obvious. It will often lead to disillusionment amongst staff with the customer satisfaction process on the grounds that “whatever we do it makes no difference, so it’s pointless trying to improve customer satisfaction”. It is therefore essential from a business perspective to have a CSM methodology that is discriminating enough to detect any changes in customer satisfaction, however small. As well as being more suitable for tracking small changes over time, scales with more points discriminate better between top and poor performers, so tend to have greater utility for management decision making in situations where a company has multiple stores, branches or business units.
As illustrated by the charts in Figure 8.8, whilst both 5- and 10-point scales exhibit a skewed distribution, data from the 10-point scale are more normally distributed and show more variance.

FIGURE 8.8 Distribution of data across scales
[Histograms of response frequencies on a 5-point scale (1–5) and a 10-point scale (1–10). Both distributions are skewed towards the top of the scale, but the 10-point data are more normally distributed and show more variance.]
It should also be noted that variance can be further improved on numerical scales by increasing the bi-polarity of the anchored end points. A 10-point scale with end points labelled ‘dissatisfied’ and ‘satisfied’ would generate a less normal distribution than end points labelled ‘very dissatisfied’ and ‘very satisfied’. Even better would be end points labelled ‘completely dissatisfied’ and ‘completely satisfied’.

KEY POINT To maximise variance, the end-points of 10-point scales should be labelled ‘completely satisfied’ and ‘completely dissatisfied’.

From a technical research point of view there is a compelling argument for the 10-point scale because it is easier to establish ‘covariance’ between two variables with greater dispersion (i.e. variance around their means). Covariance is critical to the development of robust multivariate dependence models, such as identifying the drivers of customer loyalty or establishing the relationship between employee satisfaction and customer satisfaction. In fact, many sophisticated statistical modelling packages treat data as ordinal only if the scale has fewer than 6 points. In the light of the above arguments it would be valid to ask, ‘why stop at 10 points?’ From a data point of view it would be better to have even more points. Federal Express uses a 100-point scale to track ‘micro-movements’ in customer satisfaction in its frequent measures. 20-point scales have also been used. However, questions must be easy for respondents to understand in order to have a high level of
confidence in the validity of the answers. People find it easiest to respond to 5-point verbal scales and 10-point numerical scales. This may be because giving (or receiving) a score out of 10 tends to be familiar to most people – whether from tests at school or from the reviews of footballers in newspapers. Numerical scales with fewer or more than 10 points are more difficult for people, as are verbal scales with more than 5 points. Following a test at Cornell University in 1994, Wittink and Bayer concluded that the 10-point end-point anchored numerical scale was most suitable for customer satisfaction measurement. Their reasons included respondent perspective issues such as simplicity and understandability as well as reliability issues such as repeatability (the extent to which the same scores are given by respondents in successive tests). Most importantly, they concluded that it was the best scale for detecting changes over time and for improving customer satisfaction7.

KEY POINT 10-point numerical scales are most suitable for measuring and monitoring customer satisfaction.

Michael Johnson from the University of Michigan Business School and Anders Gustafsson from the Service Research Centre at Karlstad University (the originators of the American Customer Satisfaction Index and Swedish Customer Satisfaction Barometer) are in no doubt that a 10-point numerical scale should be used for customer satisfaction measurement8.
8.9 The danger of over-stating customer satisfaction Due to the two problems outlined above (narrower distribution of scores and aggregation of data), verbal scales invariably generate higher customer satisfaction scores than numerical scales, tempting organisations to adopt a dangerous level of complacency regarding their success in satisfying customers. According to Allen and Rao3, a company that routinely scores 90% for overall customer satisfaction on a 5-point scale will typically score 85% on a 7-point scale and only 75% on a 10-point scale. This is corroborated by our own experience when we have to convert an aggregated customer satisfaction index from a verbal scale into the weighted customer satisfaction index described in Chapter 11 and generated by a 10-point numerical scale. This is done by asking some questions twice to the same sample in the same survey, using first one scale then the other. Usually they would be at different ends of the questionnaire, separated by intervening questions, and by interview rather than self-completion questionnaire to minimise the risk that the answers provided for the first scale will influence the scores given for the second scale.
An alternative, but more costly approach is to ask identical questions in separate, simultaneous surveys with random and representative samples of at least 200 customers, using the 5-point scale with one group and the 10-point scale with the other. Self-completion questionnaires would be acceptable for this latter approach. Figure 8.9 illustrates the results from five such tests, showing how the headline measure of customer satisfaction from the two different scales compared:

FIGURE 8.9 Satisfaction levels across scales

             % “satisfied”            Satisfaction Index
             (5-point verbal scale)   (10-point numerical scale)
Example 1    78.6%                    65.3%
Example 2    84.5%                    67.4%
Example 3    87.6%                    70.9%
Example 4    90.3%                    74.4%
Example 5    92.3%                    75.8%
This can lead to a dangerous level of complacency. In Example 5, the 92.3% produced by the 5-point verbal scale suggests that the company is doing very well at satisfying its customers. In fact, plugging its customer satisfaction index of 75.8% into The Leadership Factor’s benchmarking database demonstrated that it was in the bottom half of the league table in its ability to satisfy customers! It is therefore hardly surprising that companies misleading themselves with unrealistically high levels of customer satisfaction from verbal scales complain that their ‘satisfied’ customers are often defecting. They then begin to question the point of customer satisfaction measurement. What they should be questioning is their misleading CSM process. Their customers are actually well below the levels of satisfaction that would guarantee loyalty.

KEY POINT 5-point verbal scales dangerously over-estimate customer satisfaction.

The risk of over-estimating customer satisfaction by monitoring the top two boxes on 5-point scales was demonstrated by AT&T9, which was regularly achieving headline satisfaction measures of over 90% and paying staff bonuses based on them, but in 1997 began to have doubts when some businesses made major losses despite these apparently high levels of customer satisfaction. On investigation, they discovered that repeat purchase rates were substantially different for customers rating “excellent” compared with those rating “good”. They also found that overall satisfaction scores of 95%+ were correlated with scores in the low 80s on their measure of “worth what paid for”. A huge amount of evidence collected over a 30-year period at Harvard Business School also supports this view: researchers there have found very strong correlations between customer satisfaction and loyalty, but only at high levels of satisfaction10. Merely being satisfied isn’t enough in today’s competitive markets, and a tougher measure
based on a 10-point numerical scale is necessary to highlight this.

FIGURE 8.10 Satisfaction-Loyalty relationship10
[Curve plotting loyalty (0–100%) against satisfaction (scored 1–10). Loyalty rises slowly through the ‘zone of defection’ and ‘zone of indifference’, then climbs steeply in the ‘zone of affection’ at high satisfaction. ‘Saboteurs’ sit at the bottom of the curve and ‘apostles’ at the top.]
8.10 Top performers and poor performers

For poor performers with low levels of satisfaction, the choice of rating scale matters little. They will get a fairly normal distribution with 5-, 7- or 10-point scales and have little need for advanced analysis of the data, since the problem areas that need addressing will be obvious. By contrast, choice of scale becomes much more critical for top performing companies, for several reasons:
1. Companies with high levels of satisfaction need a very tough measure if they are to identify further opportunity for improvement.
2. Companies in this situation need to employ much more sophisticated statistical techniques that drill down into the data to uncover drivers of satisfaction or differences in satisfaction between groups of customers that may not previously have been considered.
3. In situations where there are multiple business units (e.g. branches, regions, stores, sites etc) it is very important to be able to discriminate between the better and poorer performing units.
For all the above reasons, the greater variance yielded by 10-point numerical scales and the ability to employ advanced multivariate statistical techniques with good levels of predictive and explanatory power are extremely beneficial to high performing companies. Of course, poor performing organisations aiming to improve would be well advised to use a scale that will also be suitable when they achieve their objective. When General Motors, one of the pioneers of customer satisfaction
research, began to suspect that their CSM process (based on a balanced 5-point verbal scale) was not providing a sound basis for decision making, they analysed 10 years of back data involving over 100,000 customer responses9. It demonstrated that the relationship between customer satisfaction and loyalty was not linear but displayed the characteristics shown in Figure 8.10, with loyalty declining very rapidly if satisfaction scored anything lower than ‘very satisfied’. This led them to draw two conclusions: first, that only a ‘top box’ score represented an adequate level of business performance; and second, that having only one point on the scale covering the whole range of performance from adequate upwards was clearly useless. Their solution to the latter problem was to invent a positively biased scale, to provide the basis for moving customers from satisfied to delighted:

Delighted: ‘I received everything I expected and more’
Totally satisfied: ‘Everything lived up to my expectations’
Very satisfied: ‘Almost everything lived up to my expectations’
Satisfied: ‘Most things lived up to my expectations’
Not satisfied: ‘My expectations were not met’

Although the four levels of satisfaction are much more useful than the two offered by a balanced verbal scale, this is matched by the 10-point scale, which also has all the other analytical advantages of numerical over verbal scales. The ‘top box’ concept, however, is interesting, especially for high performing companies seeking to move ‘from good to great’. On a 10-point numerical scale it is generally considered that 9 is the loyalty threshold, so monitoring ‘top box’ scores (i.e. 9s and 10s) as well as an overall customer satisfaction index can be very useful.
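Monitoring top-box scores alongside a headline index is straightforward to sketch. The ratings below are invented, and the index shown is a crude unweighted mean converted to a percentage; the weighted index the book recommends is described in Chapter 11.

```python
# Hypothetical 10-point ratings from twenty customers.
ratings = [10, 9, 9, 8, 8, 8, 7, 9, 10, 6, 8, 9, 7, 8, 9, 10, 8, 9, 7, 8]

# 'Top box' here means 9s and 10s, the scores above the assumed loyalty threshold.
top_box = 100 * sum(1 for r in ratings if r >= 9) / len(ratings)

# Crude satisfaction index: mean rating expressed as a percentage of the maximum.
index = 100 * (sum(ratings) / len(ratings)) / 10

print(f"top box: {top_box:.0f}%  index: {index:.1f}%")
```

Tracking both figures shows different things: the index registers small movements anywhere on the scale, while the top-box percentage shows how many customers sit above the loyalty threshold.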
8.11 Up-to-date best practice

The questions on the ACSI are rated on a 10-point numerical scale. According to the University of Michigan this is for two main reasons: first, because 10 points are needed to provide the required level of discrimination at the satisfied end of the scale (and a 10-point verbal scale is not workable, especially for telephone interviews) and secondly because of the analytical benefits afforded by numerical scales11,12.
Conclusions
1. 5-point verbal and 10-point numerical scales are both easy for customers to complete.
2. Verbal and numerical scales differ massively in terms of the types of statistical techniques that are permissible, verbal scales possessing very limited analytical power.
3. Scales with more points generate more variance so are better for tracking satisfaction over time, for discriminating between high and low performers and for using sophisticated statistical techniques.
4. Scales with fewer points produce higher satisfaction results, leading to complacency within the organisation and misunderstanding about why apparently ‘satisfied’ customers are defecting.
5. For organisations that want to give themselves the best chance of improving customer satisfaction as well as being able to reliably judge their success in achieving that goal, the 10-point numerical scale is the only suitable option for customer satisfaction measurement.
6. For that reason, it is the scale used by the University of Michigan for the American Customer Satisfaction Index.

A 2007 article by Coelho and Esteves13 tested the difference between 5- and 10-point scales specifically for customer satisfaction surveys. They concluded that the 10-point scale should be used because it was much better for analysis, since the 5-point scale produces a higher concentration of responses in the mid-range. They also disproved the popular myth that 5-point scales are easier for respondents, showing no difference in non-response between the scales and no difference in ease of completion across age groups or education levels.
References:
1. Norman and Streiner (1999) “PDQ Statistics”, BC Decker Inc, Hamilton, Ontario
2. Likert, Rensis (1970) “A Technique for the Measurement of Attitudes”, in Summers, G F (ed) “Attitude Measurement”, Rand McNally, Chicago
3. Allen and Rao (2000) “Analysis of Customer Satisfaction Data”, ASQ Quality Press, Milwaukee
4. Oppenheim, A N (1992) “Questionnaire Design, Interviewing and Attitude Measurement”, Pinter Publishers, London
5. Dillon, Madden and Firtle (1994) “Marketing Research in a Marketing Environment”, Richard D Irwin Inc, Burr Ridge, Illinois
6. Grapentine, T (1994) “Problematic scales”, Marketing Research 6, 8-13 (Fall)
7. Wittink and Bayer (1994) “Statistical analysis of customer satisfaction data: results from a natural experiment with measurement scales”, Working Paper 9404, Cornell University Johnson Graduate School of Management
8. Johnson and Gustafsson (2000) “Improving Customer Satisfaction, Loyalty and Profit: An Integrated Measurement and Management System”, Jossey-Bass, San Francisco, California
9. Hill and Alexander (2006) “The Handbook of Customer Satisfaction and Loyalty Measurement”, 3rd edition, Gower, Aldershot
10. Heskett, Sasser and Schlesinger (1997) “The Service-Profit Chain”, Free Press, New York
11. Ryan, Buzas and Ramaswamy (1995) “Making Customer Satisfaction Measurement a Power Tool”, Marketing Research 7, 11-16 (Summer)
12. Fornell, Claes (1995) “The Quality of Economic Output: Empirical Generalizations About Its Distribution and Association to Market Share”, Marketing Science 14 (Summer)
13. Coelho and Esteves (2007) “The choice between a five-point and a ten-point scale in the framework of customer satisfaction measurement”, International Journal of Market Research 49, 3
CHAPTER NINE
The questionnaire

By the time the questionnaire design stage is reached, much of its content will already be determined – chiefly the list of customer requirements identified by the exploratory research. Two other factors that will heavily influence the final design of the questionnaire will also have been decided by now: firstly, what type of survey to administer and, secondly, the use of a 10-point scale. Of course, the survey instrument used to collect the data must generate reliable information, and this can be compromised by many elements of the final questionnaire design process.
At a glance
In this chapter we will:
a) Review the importance of asking the right questions.
b) Consider how the questionnaire should be presented to customers.
c) Describe the types of question that can be used in surveys.
d) Specify the sections to be included in the questionnaire.
e) Explain how to score requirements for satisfaction and importance and how to probe low satisfaction for more insight.
f) Describe and evaluate questions for measuring loyalty.
g) Discuss the length of the questionnaire.
h) Review question wording, especially the avoidance of common mistakes.
i) Explain how to close the questionnaire.
j) Consider the need for piloting.
9.1 The right questions

As we know, customer satisfaction is a measure of the extent to which the organisation has met customers’ requirements, and this has two implications for questionnaire design.

9.1.1 Meeting customers’ requirements

The first implication was covered in Chapter 4, where we established that in order to measure whether customers’ requirements are being met, the questions asked must
focus on customers’ main priorities. These are established through exploratory research using the ‘lens of the customer’1. If the ‘lens of the organisation’ is used, and management asks questions on the topics they wish to cover, the survey will not provide a measure of whether customers’ requirements have been met. Compared with most types of market research, therefore, this makes questionnaire design a relatively straightforward exercise for CSM. It is not necessary, or even desirable, to consult other staff in the organisation to find out what they would like to see on the questionnaire, since its core contents will already have been determined by the exploratory research. The second implication is that the questionnaire must cover both sides of the equation in our definition of customer satisfaction – importance and satisfaction2,3,4 – otherwise the relative importance of customers’ requirements would never be reliably understood. Measures of importance and satisfaction are both necessary to achieve the two main outcomes described in Chapters 11 and 12. The core content of the questionnaire will therefore comprise the list of the customers’ main requirements, rated for both importance and satisfaction. There is no scope for debate over this if the results of the survey are to provide a measure that accurately reflects how satisfied or dissatisfied customers feel.

9.1.2 Customers’ perceptions

The information collected and monitored by a CSM process will be customers’ perception of the extent to which the organisation has met their requirements. It will not necessarily be an accurate reflection of the organisation’s real performance. As we said earlier in this book, customers’ perceptions are not always fair or accurate, but they are the information on which customers base their future behaviours, such as buying again and recommending. It is therefore an accurate understanding of customers’ perceptions that is the most useful measure for the organisation to monitor.
This means that the questionnaire should focus on eliciting customers’ genuine opinions and should definitely not attempt to lead them, by providing information about the organisation’s actual performance for example.
9.2 Introducing the questionnaire

An important objective of questionnaire design is to ensure that respondents relate to the questionnaire. According to McGivern5, the questionnaire should be seen as “a sort of conversation, one in which the respondent is a willing, interested and able participant”. This point applies particularly to interviews but should also be a guiding principle in the design of user-friendly self-completion questionnaires. As with any interaction with another person, the start of the process is especially important, hence the emphasis placed on the introductory letter in Chapter 7. This will have been sent before customers are approached for an interview and, ideally, before a self-completion questionnaire is received. The beginning of the questionnaire or interview script will therefore have to repeat the main points of the introductory letter and should cover the following points:

1. A reference to the introductory letter, to remind the customer of its contents and to emphasise the authenticity of the research.
2. If necessary, any qualification to make sure that the respondent concerned will be able to answer the questions.
3. The fact that the survey is confidential and that the respondent’s anonymity will be protected, introducing the name of any third party agency at this point if appropriate. It can also be helpful to specify any relevant code of conduct, such as that of the Market Research Society in the UK. It is very important to convey the message that this is not selling and that the exercise will be beneficial as well as relevant to customers.
4. To further emphasise the credibility of the exercise, it is especially useful to be able to mention at this point if feedback will be provided to respondents after the survey.
5. How long it will take. If an interview, respondents should be asked if they would like to make an appointment to do the interview at a more convenient time.
6. To maximise participation in interviews, it is crucial to strike the right balance between friendliness and professionalism. Clearly, an interviewer should not appear unfriendly, but a professional, even authoritative, tone will help to maximise participation rates.
9.3 Types of question

Before considering specific questions it is helpful to consider the basic types of question that can be asked in surveys.

9.3.1 Open questions

Open, or free-response, questions allow respondents to say or write anything they wish in answer to the question. Since they tend to minimise the risk of the researcher leading the respondent, they should elicit an answer that is the closest possible to the customer’s real feelings. This strength is also their biggest weakness, as they often generate a huge volume of information that can be difficult to analyse or use. According to Oppenheim6, “free response questions are often easy to ask, difficult to answer and still more difficult to analyse.”

The most common use of open questions is probing low scores to satisfaction and loyalty questions to understand customers’ negative feelings about the organisation. It is also possible to probe top box scores to understand what leads to positive feelings in customers. Since probing demands more time from the customer, and is extremely time consuming at the analysis and implementation stages if comments are to be utilised effectively, it is wise to be selective with probing. If forced to choose, most organisations would think it more useful to reduce dissatisfaction (and hence defections) than to boost customer delight, so it is normal practice to probe low satisfaction scores rather than high ones. An exception would be the small percentage of organisations with exceptionally high levels of customer satisfaction, whose objective will be to move satisfied customers scoring 8s and 9s into highly satisfied ones scoring 10s. These companies do need to understand exactly what has produced the highest levels of satisfaction in some customers. Most organisations, however, will have significant levels of dissatisfaction, and reducing this should be their first priority.

KEY POINT
Understanding the reasons behind dissatisfaction is the most productive use of the limited opportunity for open questions in a CSM main survey.

There will also be other opportunities for open questioning in CSM main surveys, particularly in B2B, where the survey is more likely to be administered by interview rather than self-completion, and where customers will often have some very knowledgeable views which they will be quite interested in sharing if given sufficient encouragement. A good example of this use of an open question is:

“Imagine you were Chief Executive of XYZ. What is the most important change you would make to the company (a) in the short term, and (b) in the long term?”

When interviewing senior managers in a B2B market, a question like this can stimulate some very insightful responses, which could be extremely valuable to the company concerned. If necessary the interviewer would probe further, asking the respondent to explain his reason or to provide more detail.

9.3.2 Closed questions

Closed questions are quick, low cost, easy for respondents and interviewers, and facilitate clear, unambiguous comparisons7.
Since the main purpose of customer satisfaction surveys is to generate and monitor measures, most questions will be closed. This means that respondents have a limited number of prescribed options for their answer. For CSM the response mechanism for most questions will be the rating scale used to measure satisfaction and importance. However, many other types of closed question are possible. Closed questions can be attitudinal or factual and can have any number of response options in a wide variety of formats. In addition to the rating scales examined in Chapter 7, other types of closed question relevant to CSM include any of the following.

(a) Dichotomous question

A dichotomous question will have only two possible answers, usually ‘yes’ or ‘no’:

“Have you flown business class within the last three months?”

Dichotomous questions are often used in CSM for qualifying respondents, so that one or more subsequent questions will be asked only of customers with valid experience. They can also be very useful for understanding the antecedents of satisfaction by forcing customers into two mutually exclusive tracks of the customer journey, for example:

“On your last visit to the supermarket were you greeted by a member of staff as you entered the store?”

(b) Multiple choice question

Multiple choice questions are commonly used in CSM to place customers into categories. They could be demographic categories such as:

“Which of the following age groups are you in? Tick one box only.
Under 25 / 25-44 / 45-64 / 65 or over”

As well as covering the full range, it is essential that the categories do not overlap. Sometimes, as in the question above, a person cannot be in more than one of the categories, so should be instructed to give no more than one answer. For other questions, however, respondents could legitimately fall into more than one category, so should be allowed to give more than one answer, for example:

“Which of the following methods do you use when you need information about XYZ’s products or services? Tick any options that apply.
XYZ web site / Customer Service Department by email / Customer Service Department by phone / Your sales representative by email / Your sales representative by phone / Your sales representative in person / Other method / Have not needed information”

Where there are very many possible answers, but only the most common are listed, it becomes necessary to add an option such as “other” or “none of the above” and, where sensible, a “don’t know” option.

9.3.3 Open question – closed response

Sometimes it can be very helpful to try to secure the main advantages of both open and closed questioning by adopting the open question – closed response format.
Only possible if customers are interviewed, the question is asked as an open question, allowing the customer to give any response. However, there is a response scale that is clearly visible to the interviewer, so if the customer gives a response that fits the scale, the interviewer ticks the appropriate box or boxes. If not, they can probe, typically using the scale or part of it to make certain which response category is correct. Alternatively, but less commonly, the interviewer can be instructed to write in full any customer comments that do not fit the scale.

The open question – closed response approach can be very useful when a list of multiple choice options is very long, making it time consuming for the interviewer to read and very tedious for the respondents. A good example would be an ethnic origin question, where the list of options, especially the kind of list favoured by the public sector, can be extremely long. Asking the customer an open question will usually generate a response that fits exactly into one of the categories. If it doesn’t, the customer can be asked to give more detail or, as a final resort, several or all of the possible responses can be read out.

KEY POINT
In interviews, use of the open question – closed response technique will often make the questionnaire less tedious for customers and quicker to administer.

Another good use of the technique is to identify top-of-mind feelings or perceptions, such as awareness of products or services, where customers might be asked:

“What products (and/or services as relevant) does XYZ provide?”

For some organisations the interviewer might have a long list of products with two response options for each one. First, a box for unprompted awareness to cover all the responses generated by the open question. The interviewer would then read down the remaining products that had not been mentioned, ticking the prompted awareness option for any that the respondent was aware of but had not previously mentioned. This information would be very useful to XYZ, as it is only products with high unprompted awareness that customers are likely to make enquiries about.
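As an illustrative sketch only (not from the book – the product names and interview data below are hypothetical), the unprompted/prompted tally described above could be computed like this:

```python
# Illustrative sketch: tallying unprompted vs total product awareness from
# open question - closed response interviews. Each interview record lists
# products the respondent named spontaneously ('unprompted') and those
# recognised only when read out ('prompted'). Data are hypothetical.

PRODUCTS = ["savings", "mortgages", "insurance"]  # assumed product list

def awareness_summary(interviews):
    """Return {product: (% unprompted awareness, % total awareness)}."""
    n = len(interviews)
    summary = {}
    for product in PRODUCTS:
        unprompted = sum(product in i["unprompted"] for i in interviews)
        prompted = sum(product in i["prompted"] for i in interviews)
        summary[product] = (100 * unprompted / n,
                            100 * (unprompted + prompted) / n)
    return summary

interviews = [
    {"unprompted": {"savings"}, "prompted": {"mortgages"}},
    {"unprompted": {"savings", "insurance"}, "prompted": set()},
]
print(awareness_summary(interviews))
```

A large gap between total and unprompted awareness for a product would flag it as one customers recognise but are unlikely to enquire about spontaneously.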
9.4 The structure of the questionnaire

Before wording the questions it is necessary to plan the overall layout of the questionnaire. The sections required for a CSM questionnaire, and the order in which they should appear, are summarised below.

9.4.1 Introduction

Whether a self-completion questionnaire or a questionnaire for an interview, it must begin with an introduction. As explained in 9.2, the first objective of the introduction is to make sure customers do take part in the interview or complete the questionnaire. However, since CSM is about measuring, there will also need to be some technical instructions about the rating scale, which should appear straight after the introduction and just before satisfaction and importance are scored. Although giving scores on a 10-point scale is very easy for people, the scale should be explained, particularly labelling the end points so there is no misunderstanding about which is the high scoring end and which is the low one. The introductory wording for satisfaction and importance can therefore be clear but short and simple, such as:

“I would now like you to score a list of factors for how satisfied or dissatisfied you are with XYZ’s performance on each one, using a scale of 1 to 10, where 1 means completely dissatisfied and 10 means completely satisfied.”

And for importance:

“I would now like you to score the same list of factors for how important or unimportant they are to you, again using a 1 to 10 scale, where 1 means of no importance at all and 10 means extremely important.”

9.4.2 Scoring satisfaction and importance

The customer requirements must be covered in two separate sections for satisfaction and importance. It is tempting, but incorrect, to cover both importance and satisfaction for each requirement before moving on to the next item. Adopting this approach results in an artificial correlation between the importance and satisfaction scores for each requirement. Separate importance and satisfaction sections should therefore be used, but in what order? Although it is conventional to ask the importance section before satisfaction, our tests at The Leadership Factor show that it is better to start with satisfaction scores, since this makes respondents familiar with all the issues before they are asked to score importance. When the importance section follows satisfaction, a wider range of importance scores is given, and this provides greater discriminatory power at the analysis stage. Scores given for satisfaction vary little whether they are asked before or after importance.

Once the list of customer requirements has been scored for satisfaction, any low satisfaction scores can be probed. This is completely controllable with interviews, where the interviewer would return to the low scoring questions and ask the respondent to explain why they gave each of the scores. On web surveys, a mandatory pop-up comments box can appear after each low satisfaction score. This achieves the desired effect of obtaining a comment for each low score, but may distort the data as some respondents will learn to avoid giving low scores to sidestep the tedium of the comments boxes.
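The web-survey probing rule just described can be sketched as a simple filter. This is an illustrative sketch, not the book’s implementation; the threshold and the requirement names below are assumptions:

```python
# Illustrative sketch (not the book's implementation): the web-survey rule
# that a mandatory comment box follows each low satisfaction score.
# The threshold of 6 and the requirement names are assumptions.

PROBE_THRESHOLD = 6  # assume scores below 6 on the 10-point scale are 'low'

def scores_to_probe(satisfaction_scores):
    """Return the requirements whose scores should trigger a comment box."""
    return [requirement for requirement, score in satisfaction_scores.items()
            if score < PROBE_THRESHOLD]

answers = {"delivery reliability": 4,
           "product quality": 9,
           "helpfulness of staff": 5}
print(scores_to_probe(answers))
```

In practice the threshold would be chosen to balance comment coverage against the score-avoidance distortion the text warns about.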
On paper questionnaires an open comments box invites customers to make comments, “particularly about any items you have scored low for satisfaction”, but typically only around one third or one quarter of respondents will write comments, and even then not for all of their low scoring requirements.

After scoring and probing satisfaction, list all the requirements again and rate them for importance. Although some textbooks would state that the requirements should be listed in a random order and, strictly speaking, not in the same order on every questionnaire, on the grounds that earlier questions might influence respondents’ thinking on later ones, practical convention is to list the requirements in a logical order. This is supported by McGivern5, who states that illogical sequencing and strange non sequiturs will damage rapport with respondents and will confuse them, leading to reduced commitment on their part and sometimes failure to complete the questionnaire or interview. The order will be the same for both the satisfaction and importance sections.

In deciding the order in which the questions should be listed, there are two basic choices. One option is the sequence of events that customers typically go through when dealing with the company, which works very well for one-off events like taking out a mortgage or making an insurance claim. However, for many organisations that have ongoing relationships with customers, involving a variety of contacts for different things at different times, using the sequence of events as a basis for question order will not work. In that situation it would be normal to use topic groupings, with all the questions on quality grouped together, all the questions on delivery together and so on.

9.4.3 Additional questions

Asking a small number of ‘lens of the organisation’ questions is perfectly valid provided they come after the ‘lens of the customer’ requirements. This ensures that the satisfaction and importance scores that will be monitored over time are not influenced by any other factors asked earlier. If this rule is followed there is no restriction on the subject matter of the additional questions, which can cover anything else organisations would like to know. Nor does the type of question matter. They can be open or closed and, if the latter, can employ any kind of rating scale, since they will be analysed completely separately from the satisfaction measurement part of the questionnaire. The number of additional questions that can be accommodated will be dictated by how much time remains after allowing for the satisfaction, importance and classification questions. The time available is addressed in sections 9.5 and 9.6. For managing and improving customer satisfaction and loyalty, two types of additional question are most appropriate – loyalty questions and satisfaction improvement questions – and these will be covered in the next two sub-sections.

9.4.4 Loyalty questions

For companies wanting to link customers’ attitudes with their behaviour, the additional questions will need to cover loyalty. This will make it possible to calculate the drivers of loyalty (see Chapter 10).
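As a hedged illustration of what calculating ‘drivers of loyalty’ can mean in practice (Chapter 10 gives the book’s own method), one simple approach is to correlate each requirement’s satisfaction scores with a loyalty score across respondents. All data below are invented:

```python
# Illustrative sketch: a simple Pearson correlation between each
# requirement's satisfaction scores and a loyalty score, across
# respondents. A stronger positive correlation suggests the requirement
# is more likely to be a driver of loyalty. Data are hypothetical.
import statistics

def pearson(xs, ys):
    """Pearson correlation coefficient of two equal-length score lists."""
    mx, my = statistics.mean(xs), statistics.mean(ys)
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

# columns: satisfaction with 'delivery' and 'price', plus a loyalty score
delivery = [9, 4, 7, 3, 8]
price    = [6, 6, 5, 7, 6]
loyalty  = [9, 3, 8, 2, 9]

print("delivery vs loyalty:", round(pearson(delivery, loyalty), 2))
print("price vs loyalty:", round(pearson(price, loyalty), 2))
```

Here delivery satisfaction tracks loyalty closely while price does not, so delivery would be flagged as the stronger candidate driver; the book’s own derivation in Chapter 10 remains the authoritative method.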
As additional questions they don’t have to follow any specific format, but it is advisable to retain the 10-point scale, for the reasons explained in Chapter 8 and to facilitate any modelling work to establish the links between satisfaction and loyalty (see Chapter 14). It is also useful to ask several loyalty questions, mainly for the benefit of having a loyalty index6 (see Chapter 11), but also because it can be helpful to cover different dimensions of loyalty. As Johnson et al from the University of Michigan point out, satisfaction measurement methodology applies universally across businesses but loyalty measures do not1, because the desired behavioural outcomes of satisfaction differ considerably across sectors and sometimes, due to company strategy, across different businesses within the same sector. This illustrates the fallacy of universal measures such as a net promoter score based on a standard loyalty question. Typical dimensions of loyalty include retention, commitment, recommendation, related sales, trust, value and preference. It would be unusual to include loyalty questions covering every dimension; rather, the three or four most relevant to an organisation should be selected. We now outline suggested forms of wording for loyalty questions across all seven dimensions.

KEY POINT
There is no standard loyalty question that is equally applicable for all organisations. The best approach is to ask several loyalty questions covering the dimensions of loyalty that are most relevant to the organisation concerned.

(a) Retention

“On a scale of 1 to 10, where 1 means definitely not and 10 means definitely, do you expect to be a customer of XYZ in 12 months’ time?” (Adjust the time scale if appropriate.)

This question is most relevant to companies in markets where there is a significant level of switching, but where, before and after the switch, single sourcing would be normal. It is therefore appropriate for insurance companies, mortgage providers and most utility companies, especially those such as mobile telephony where one-year contracts are common. It is clearly not suitable for markets where customers can’t switch, e.g. water utilities in the UK, nor for those where switching is theoretically possible but unusual in practice. Many suppliers of computer systems or software are in this position, as the cost and hassle involved in switching are prohibitively high. It is also unsuitable for promiscuous markets, such as most retail markets, where customers use many of the suppliers. Even if a customer is less loyal and has reduced her spend with a retailer, she will probably still be a customer in 3, 6 or 12 months’ time, albeit a less valuable one.

(b) Commitment

“If you could turn the clock back and start over again, would you choose XYZ as your (bank, internet service provider, waste disposal service, fleet management supplier etc)? Please answer on a scale of 1 to 10, where 1 means definitely not and 10 means definitely.”

This question is particularly useful in markets where switching is possible but not very common.
As well as the computing sector mentioned above, it is relevant for banking, any kind of subscription service such as satellite TV or a heating and plumbing call-out service, and contractual arrangements in B2B markets, such as facilities management, security, cleaning, outsourced payroll contracts and so on. It is a very good loyalty indicator in difficult-to-switch markets because it highlights customers who wish they could switch even though they probably won’t. This can serve as a very useful warning, because in captive markets customers will endure quite high levels of dissatisfaction but will usually reach a ‘cliff edge’ where the pain of remaining a customer outweighs the cost and hassle of switching.

It can also be a better indicator of loyalty than the retention question in many markets, for two reasons. First, it is a question about the present rather than the future. The customer may not yet have given much thought to contract renewal, but will definitely know whether they have no regrets about the current contract and would re-sign today if necessary. Second, the retention question is more threatening because it’s a direct question about whether the customer is going to spend more money with the supplier in the future. If the survey is not anonymous and not conducted by an independent third party, it will be seen by many as sales-motivated. The commitment question is non-threatening and much more likely to elicit an answer that accurately captures the customer’s loyalty feelings.

A good use of a commitment question is illustrated by the Consumers’ Association survey into the UK mobile phone market referred to in Chapter 18. Most contracts had stiff penalties for early termination, so customers were asked if they would opt for a different network if they did not have to pay a penalty. On that measure, only 8% of Orange’s customers were uncommitted, compared with the industry average of 27% and 32% for One2One – a very telling lead indicator for both companies.

(c) Recommendation

“On a scale of 1 to 10, where 1 means definitely not and 10 means definitely, would you recommend XYZ to friends and family?”

Recommendation is relevant to most organisations, so will almost always be one of the questions that makes up a loyalty index. It is easy for customers to answer and is a good indicator of customers’ loyalty feelings. Where possible, however, it is very useful to gather information about real loyalty behaviours, instead of the attitudinal questions shown above or as well as them. It is easy to supplement the recommendation question with a behavioural one such as:

“Have you recommended XYZ to anyone in the last 3 months (time scale as appropriate)? If yes, to how many people?”

Response options will vary by business, since the incidence of recommending behaviour is much greater in some sectors than others, but the purpose of the options would be to assess the frequency of recommendation as well as to distinguish between recommenders and non-recommenders. If both types of recommendation question are asked, it can also be insightful to understand whether, of those customers who are willing to recommend in theory, some actually do so in practice much more than others. If this phenomenon does apply, and provided the variance can be linked to customer segments, there will be opportunities to target the types of customer who recommend most to reward loyalty behaviours, and to encourage or incentivise segments that are willing to recommend but in practice tend not to.
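The comparison of stated willingness with actual recommending behaviour, by segment, could be tabulated as follows. This is a sketch under assumptions – the threshold of 8 for ‘willing’ and the segment data are hypothetical, not from the book:

```python
# Illustrative sketch: of customers who say they would recommend
# (attitude), what share actually did recommend in the recall period
# (behaviour), split by customer segment. Threshold and data hypothetical.

def conversion_by_segment(respondents, willing_threshold=8):
    """% of 'willing' customers per segment who actually recommended."""
    result = {}
    for segment in sorted({r["segment"] for r in respondents}):
        willing = [r for r in respondents
                   if r["segment"] == segment
                   and r["willing_score"] >= willing_threshold]
        if willing:  # skip segments with no 'willing' customers
            did = sum(r["recommended"] for r in willing)
            result[segment] = 100 * did / len(willing)
    return result

data = [
    {"segment": "A", "willing_score": 9, "recommended": True},
    {"segment": "A", "willing_score": 10, "recommended": False},
    {"segment": "B", "willing_score": 8, "recommended": True},
]
print(conversion_by_segment(data))
```

Segments with high willingness but low conversion would be the ones to encourage or incentivise, as the text suggests.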
(d) Related sales

“On a scale of 1 to 10, where 1 means definitely not and 10 means definitely, will you consider XYZ for (your next financial product / other related household services / servicing and spares… as appropriate)?”

The wording of the related sales question requires much more tailoring across different types of business. The wording shown is appropriate for companies, such as banks, with a wide range of related products. For many companies related sales are the biggest single element of customer lifetime value, making this question a particularly important component of their loyalty index.

(e) Trust

“On a scale of 1 to 10, where 1 means definitely not and 10 means definitely, do you trust XYZ to look after your best interests as a customer?”

This is another good question for drawing out customers’ deep-seated loyalty feelings about an organisation, and adds a new dimension that may not be captured by any of the previous questions. A supplier might be performing very well on all the practical aspects of meeting customers’ requirements, so customers intend to stay with it, would sign up again if turning the clock back, and may even have bought other products and recommended. But is the organisation genuinely committed to its customers or, when the chips are down, more interested in delivering results to shareholders? Is it always managed ethically, or will it follow the most profitable route? The memory of any earlier scandals will linger long after their perpetrators have departed. Many people have a general feeling that companies, especially large ones, are much more interested in profits than anything else, including customers. Consequently, they often feel taken for granted and that their loyalty is not rewarded or valued. More than any of the other loyalty questions, the trust question will draw out any such feelings.

(f) Value

“On a scale of 1 to 10, where 1 means very poor and 10 means excellent, how would you rate the value for money provided by XYZ?”

In any situation where customers pay for the product or service, its cost or price will almost inevitably have been one of the satisfaction questions, as it is bound to be an important ‘lens of the customer’ requirement. Where price is a particularly prominent feature of a market, the questionnaire will often benefit from more than one satisfaction question covering aspects such as its competitiveness and fairness, the way price negotiations are handled, the stability of prices, special offers and promotions, the simplicity or otherwise of tariff systems and so on. For satisfaction measurement it is strongly recommended to have one or more specific, actionable questions on price, not a vague question such as value for money, which is clearly a double question. If there is dissatisfaction, should the company reduce the price or increase the value? It would also be double counting, since the key elements of the value for money equation will inevitably have been covered amongst the other requirements measured.

However, using value as a dimension of loyalty is completely different. Its purpose is not to be actionable but to provide accurate insight into customers’ loyalty feelings, and the feeling that an organisation does or does not provide good value will often form a significant element of customers’ future loyalty behaviour. In fact, questions designed to draw out customers’ deep-seated feelings often work better if they are very general, so using ‘value for money’ or ‘good value’ will be exactly the right kind of wording for a loyalty question. Value is another loyalty question that is widely applicable across markets. Even in the public sector, where customers do not pay directly for services such as health and education but do pay indirectly through taxes, the concept of providing ‘good value’ is widely understood.

(g) Preference

The preference question can also cover attitudes or behaviours as appropriate. In sectors where customers don’t use competing suppliers, it is appropriate to use a very general preference question such as:

“On a scale of 1 to 10, where 1 means amongst the worst and 10 means amongst the best, how does XYZ compare with other organisations that you use?”

This question can be focused where relevant, e.g. “compared with other Government departments”. However, preference questions are of most value to companies in competitive markets.
They can be attitudinal, such as:

“On a scale of 1 to 10, where 1 means the worst and 10 means the best, how does XYZ compare with other supermarkets that you use?”

Preference questions can also elicit specific ‘share of wallet’ information, such as:

“In a normal week, what percentage of your grocery shopping is done at XYZ?”

This type of ‘share of wallet’ question is particularly useful in markets where there is dual or multiple sourcing and often high levels of switching. In Chapter 11 we will explain how to compile data from the loyalty questions into an index, and in Chapter 13 we will explore in more detail the predictive value of different loyalty questions. Now we need to turn our attention to the other line of additional questioning that is particularly useful in CSM.
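The compilation of loyalty questions into an index is explained in Chapter 11; purely as a preview, a minimal sketch might take the unweighted mean of a respondent’s answered 10-point questions. The equal weighting here is an assumption for illustration and may differ from the book’s method:

```python
# Minimal sketch of a loyalty index: the unweighted mean of a respondent's
# answered 10-point loyalty questions, expressed as a percentage.
# Equal weighting is an assumption; Chapter 11 gives the book's method.

def loyalty_index(scores):
    """Mean of the answered loyalty questions as a % of the 10-point scale."""
    answered = [s for s in scores.values() if s is not None]
    return 100 * sum(answered) / (10 * len(answered))

# hypothetical respondent who skipped the trust question
respondent = {"retention": 9, "recommendation": 8, "value": 7, "trust": None}
print(round(loyalty_index(respondent), 1))
```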
9.4.5 Satisfaction improvement questions

On a conventional satisfaction measurement questionnaire, the data will pinpoint the areas to address to improve customer satisfaction (see Chapter 11) and, if customers are interviewed, the probing of low satisfaction will provide considerable insight into the reasons for dissatisfaction and the changes that customers would like to see in those areas. However, to maximise the chances of improving customer satisfaction, it can be very useful to have additional, precise information that relates to the company’s internal organisation and processes. These are totally ‘lens of the organisation’ questions, whose purpose is to enable the company to continually fine-tune its satisfaction improvement programme by monitoring the effect of specific changes on customer satisfaction. The possibilities for such questions are almost endless, so in this section we will simply illustrate the concept with two examples.

A common area of poor performance for customer satisfaction is handling problems and complaints. Clearly, any additional questions in this area would be asked only of customers with recent experience of a problem or complaint. The focus for additional questions will be provided partly by customers’ comments about their dissatisfaction in that area and partly by management’s view of specific actions the organisation could take to improve matters. One possibility could be to reduce the time it takes to resolve a problem, in which case a suitable question would simply be:

“How long did it take to resolve the problem?”

Response options would include several quite detailed time frames relevant to the organisation concerned. As well as being able to verify that ‘time to resolution’ does affect customer satisfaction, it will be possible to identify the tipping point beyond which satisfaction deteriorates.
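Locating that tipping point could be sketched by averaging satisfaction within each time-to-resolution band. The bands and scores below are hypothetical, purely to illustrate the analysis:

```python
# Illustrative sketch: finding the 'tipping point' by averaging
# satisfaction within each time-to-resolution band. Bands and scores
# are hypothetical.
from collections import defaultdict

def satisfaction_by_band(responses):
    """Mean satisfaction score per resolution-time band."""
    by_band = defaultdict(list)
    for band, score in responses:
        by_band[band].append(score)
    return {band: sum(scores) / len(scores)
            for band, scores in by_band.items()}

responses = [
    ("same day", 9), ("same day", 8),
    ("2-3 days", 8), ("2-3 days", 7),
    ("over a week", 3), ("over a week", 4),  # satisfaction collapses here
]
print(satisfaction_by_band(responses))
```

In this invented data, satisfaction holds up until resolution takes over a week, which would mark the tipping point for improvement targets.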
Tracking surveys will show when the organisation’s internal metrics on ‘time to resolution’ improvements have changed customers’ perceptions and, as we will demonstrate in Chapter 14, the impact of the actions in improving customer satisfaction and loyalty can also be quantified. A second example could be the way in which the problem was handled – by email, letter, telephone or personally. Dichotomous questions are often very helpful for satisfaction improvement initiatives because they are totally black and white. A policy is being implemented in the eyes of customers or it isn’t. In this example it would also be very useful to relate the questioning to one or more very specific points along the problem handling journey. Typical questions would be: “Did anyone from XYZ call you to understand the details of your complaint? Did you receive an acknowledgement in writing that your complaint had been logged? Did you receive a follow-up call after the resolution of your complaint?” As well as honing such questions by incorporating the time taken for the action, their
actionability is often improved by relating them to the specific individuals or teams involved. Rules about double questions apply equally here, of course, so any additional digging must be done through separate questions such as: “How did you bring your problem to XYZ’s attention?” Response options would cover channels and individuals or teams. A question of this type often works well as a ‘closed question – open response’ where more than one response option is permissible. It will often demonstrate that customers with problems end up more satisfied if they have used a certain channel and/or a specific team or individual. Chapter 15 will provide more details on how to use this type of information to improve customer satisfaction.

KEY POINT
Dichotomous questions can be very helpful in providing actionable information to guide satisfaction improvement initiatives.

9.4.6 Classification questions

Classification questions should come at the end. Some people may be offended by what they see as impertinent questions about age, gender, ethnic origin, occupation or income, so it is always better to leave classification questions until respondents have answered the other questions9. If the classification questions are placed at the beginning of the questionnaire, respondents may abandon it or abort the interview. The one exception here would be quota or qualification questions where respondents’ suitability has to be verified before their inclusion in the survey. It is good practice to be as consistent as possible with classification questions to aid comparability over time and with other organisations. Some companies will have an internal segmentation that is fundamental to their marketing strategy and therefore dictates the categories for their classification questions. If not, it could be sensible to adopt the standard versions for demographic questions used by the Office for National Statistics (ONS) and available on their website10.
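The payoff from the dichotomous problem-handling questions in section 9.4.5 comes at analysis time: comparing satisfaction for customers who did and did not experience a specific action. The sketch below illustrates the idea with invented data; the follow-up-call question and all scores are hypothetical.

```python
from statistics import mean

# Invented illustration: satisfaction scores (10-point scale) paired with the
# answer to a dichotomous question such as "Did you receive a follow-up call?"
responses = [
    (8, True), (9, True), (7, True), (8, True),
    (5, False), (6, False), (4, False), (7, False),
]

# Split the satisfaction scores by the yes/no answer and compare the means.
with_call = [score for score, called in responses if called]
without_call = [score for score, called in responses if not called]

print(round(mean(with_call), 1))     # 8.0
print(round(mean(without_call), 1))  # 5.5
```

In this fabricated sample, customers who received a follow-up call are clearly more satisfied, which is exactly the kind of actionable contrast the text describes.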
KEY POINT
The correct sequence of questions for CSM is:
1. Satisfaction scores
2. Importance scores
3. Additional questions
4. Classification questions
9.5 Questionnaire length

9.5.1 Length of time

Whether a self-completion questionnaire or one that will be administered by interview,
10 minutes is a reasonable time to ask of customers. This duration should be clearly and honestly stated in the introductory letter. Moreover, organisations that follow the conventional good practice of surveying any individual no more than once a year will be able to say that they are asking for no more than ten minutes per annum of the customer’s time to provide feedback on how well their requirements are being met – clearly a reasonable request. In markets where customers are more interested in the subject they will often take more time to make comments. This is common in B2B customer satisfaction surveys and will often increase the average interview length to 15 minutes. However, in any market, customers who are short of time and choose not to make extensive comments should be able to complete the survey within 10 minutes.

9.5.2 Number of questions: interviews

Within that 10-minute window up to fifty questions can be accommodated on a CSM questionnaire. This may seem a surprisingly high number, but it is due to the repetitive nature of scoring the customer requirements for satisfaction and importance. Asking fifty unrelated survey questions using different question types and scales would take at least 20 minutes. For CSM, however, the bulk of the questionnaire involves scoring the customer requirements for satisfaction, then importance, all on the same scale. Customers soon get used to the scale, especially a 10-point numerical scale, so the first two sections of the questionnaire will be completed very quickly. It is normal to include up to 20 requirements that are scored for satisfaction and importance, giving 40 questions in total. These will be customers’ 20 most important requirements as identified by the exploratory research. If interviewed, customers will be probed on any low satisfaction scores they give. The number of questions to be probed will depend on how satisfied customers are and the threshold level set.
An organisation achieving a reasonable level of satisfaction can expect to probe around four of the 20 attributes on average if the threshold for probing is all satisfaction scores below six out of ten. This would mean that 44 of the 50 questions have been used, leaving six to split between additional questions and classification data. The resultant distribution of the 50 questions across a typical interview is shown in Figure 9.1.

FIGURE 9.1 Composition of an interview questionnaire

Section                            Maximum questions
Introduction                       –
Satisfaction scores                20
Probing low satisfaction scores    4
Importance scores                  20
Additional questions               3
Classification questions           3
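The arithmetic behind this composition is worth making explicit. A minimal sketch of the worked example in the text (20 satisfaction scores, 20 importance scores, around four probes, and the remainder split between additional and classification questions):

```python
# Question budget from the interview example in the text.
satisfaction = 20
importance = 20
probing = 4        # ~4 attributes probed at a 'below six out of ten' threshold
budget = 50        # the 10-minute, 50-question guideline

used = satisfaction + importance + probing
remaining = budget - used

print(used)       # 44
print(remaining)  # 6, to split between additional and classification questions
```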
This number and distribution of questions are not fixed and will be influenced by the
customer experience. This can be very brief with some organisations, such as calling a helpdesk with a straightforward technical query or booking a ticket through an agency. In these situations, there may not be as many as 20 important customer requirements to include on the questionnaire. This would enable the supplier to ask more additional questions or to have a shorter questionnaire. Organisations with a very complex customer–supplier relationship may face the opposite problem, with more than 20 important customer requirements that seem to merit inclusion. In this situation it is advisable to resist the temptation to make the main survey questionnaire longer. Instead it may be possible to reduce the number of additional and/or classification questions to create space for a longer list of customer requirements. Alternatively it may be worth considering a larger sample for the exploratory research in order to be more precise about the exact make-up of customers’ 20 most important requirements.

The only other variable on interview length will be the amount of probing. As customer satisfaction reduces, more probing will be needed. Rather than reduce the number of customer requirements to accommodate the extra probing, it is preferable to lower the probing threshold to scores below four at low levels of satisfaction. This will still generate enough qualitative information to fully understand the reasons behind customers’ dissatisfaction.

9.5.3 Number of questions: self-completion

Following the 10-minute rule, 50 is also the guide for the maximum number of questions on a self-completion questionnaire. Due to the inability to probe on postal questionnaires, the distribution of questions across the sections will differ slightly from interviews. Instead of probing low satisfaction scores, it is normal to include a comments box on a paper questionnaire. This can be inserted at the end or straight after the satisfaction section.
Since the most useful qualitative information for organisations is anything that helps them to better understand the reasons for any customer dissatisfaction, the following wording above the comments box is most useful: “Please include any additional comments in the box below. It would be very helpful if you could comment on any areas that you scored low for satisfaction.” Since a comments box is considered equivalent to one question, a self-completion questionnaire can accommodate slightly more additional or classification questions if required, resulting in the kind of composition shown in Figure 9.2.

FIGURE 9.2 Composition of a self-completion questionnaire

Section                    Maximum questions
Introduction               –
Satisfaction scores        20
Comments box               1
Importance scores          20
Additional questions       5
Classification questions   4
With paper questionnaires fifty questions can be squeezed onto a double-sided A4 sheet or can be spaced out to cover four sides. Although shorter questionnaires are desirable per se, the four-sided questionnaire is likely to achieve a higher response rate and a better quality of response because it will look more attractive and will be easier to navigate, understand and fill in9. Some respondents may never start questionnaires that have small type or look cluttered as they will be seen as difficult to complete.

KEY POINT
Provided most of the questions involve scoring customers’ requirements for satisfaction and importance on a numerical scale, a maximum of 50 questions can be answered within the recommended 10 minutes.
9.6 Design guidelines

This advice applies only to self-completion questionnaires, which need to look professional and aesthetically appealing. We have already suggested that questions should be spaced out, with an attractive layout, even if it makes the questionnaire run into more pages. Use of colour is also worthwhile. Even a two-colour questionnaire can appear much more attractive because semi-tones can be used very effectively for clarification and differentiation. By all means include the organisation’s logo and, if applicable, that of an agency to highlight the survey’s independence. Where appropriate, some organisations can also include background images or photographs to add appeal or to emphasise any subject areas that will be of interest to customers. Companies in this position include leisure clubs and venues, holiday companies, membership organisations, charities and other special interest groups. It is also important to consider the design requirements of customers with poor eyesight, which will include most older customers. In this respect, the RNIB (Royal National Institute for the Blind) recommends 12-point type, with a sans-serif font such as Arial, no reversed-out text (e.g. white text on a dark background) and a very dark colour used for the text, preferably black.

Since we know that almost all the questions will be closed, especially on self-completion questionnaires, and most of those will be scaled, it is necessary to consider precisely how customers will give their response. Using the 10-point scale as an example, customers could be asked to write a number to indicate their score, they could circle their score on a row of numbers from 1 to 10, or they could be presented with a row of boxes and asked to put a tick or a cross in the appropriate one.
Although the first takes up the least space and may present an uncluttered appearance, it is not recommended since handwritten numbers greatly increase the risk of error, with huge variations in people’s handwriting styles leading to confusion
between 1s and 7s, or 3s and 8s. Whilst ‘ticking the box’ is frequently referred to, almost as though it is the generic option, it is actually the least precise of the three remaining options, with ticks of varying sizes often covering large parts of the paper outside the box concerned. Circling numbers is easy for respondents and minimises errors, but it is much less suitable for closed questions with verbal options, such as classification questions, where very large and messy ovals will often be required. Since it would be bad practice to mix response options (e.g. circling numbers but ticking boxes for verbal categories), the only error-proof method that is applicable to all types of closed question is placing a cross in the appropriate box. This method is also by far the best option if questionnaires are scanned. For electronic questionnaires, the handwriting problem is obviously eliminated, but people generally prefer using the mouse to the keyboard, so typing a number would not be recommended. Circling numbers is not feasible, so the options are clicking on a box, which inserts an ‘x’, or clicking on a number or other response option in a drop-down menu. Either option is acceptable, although the former method is slightly quicker.
9.7 Questionnaire wording

There are many potential pitfalls to avoid in wording CSM questionnaires, since any one of them could reduce the reliability of the customer satisfaction measure. Figure 9.3 summarises the main ones.

FIGURE 9.3 Questionnaire wording

Wording checklist
1. Does the respondent have the knowledge?
   - Qualify respondents before including them in the survey
   - Offer a not-applicable option
2. Will the respondent understand the question?
   - Ambiguity of common words
   - Unfamiliar or jargon words
   - Double questions
3. Will the questions bias the response?
   - Balanced question
   - Balanced rating scale
9.7.1 Knowledgeable answers

The first thing to consider is whether respondents will possess the knowledge to provide accurate answers to the questions on the questionnaire. Not having it won’t stop them! People will often express opinions based on scant knowledge of the facts.
For example, customers might score a supermarket on ‘quality of products’, ‘level of service’ or ‘value for money’, even though it is months or even years since they shopped there. That would not be a problem if the supermarket wanted to understand the general public’s perception of its quality, service or prices, but it would be very misleading if it was trying to understand the real experiences of its customers. A related problem is that respondents may not have experience of an organisation’s performance on all the requirements covered. In a B2B market, a chief executive, for example, may not have any real knowledge of a supplier’s on-time delivery performance. To avoid gathering misleading scores from ill-informed members of the DMU, a ‘not applicable’ option should be provided for each satisfaction question. It is not necessary to provide a ‘not applicable’ option for importance scores since respondents will have a view on the relative importance of each requirement, including those with which they are not personally involved.

KEY POINT
Always include a not-applicable option when scoring the requirements for satisfaction.

9.7.2 Ambiguous questions

The second thing to consider is whether the respondents will understand the questions, or, more accurately, whether they will all assign to the questions the same meaning as the author of the questionnaire. For example, many of the words we use routinely in everyday speech are problematical when used in questionnaires because they are simply not sufficiently precise. A pertinent example is shown in Figure 9.4.

FIGURE 9.4 Ambiguous question

Which of the following newspapers do you read regularly? Please tick the box next to any newspapers that you read regularly:
Express
Mirror
Guardian
Sun
Times
What exactly does the word ‘regularly’ mean? Questionnaire wording has to be extremely precise, to the point of being pedantic. If anything is open to interpretation the results will often be unclear when the survey is analysed. Figure 9.5 shows how the question about the newspapers would have to be phrased.
FIGURE 9.5 Precise question

How often do you read each of the following newspapers? Please tick one box for each newspaper.

Response options (one box per newspaper): Every day / More than once a week / Weekly / Monthly / Every 3 months / Less than once every 3 months / Never
Newspapers listed: Express, Guardian, Mail, Mirror, Sun, Times
9.7.3 Jargon

Another reason why respondents misunderstand questions is the use of unfamiliar words. Everybody knows it is not advisable to use jargon, but most people still underestimate the extent to which words they use all the time at work with colleagues can be jargon words to customers. Of course, that is another very good reason for carrying out the exploratory research, so that the customers’ terminology can be used on the questionnaire. As well as obviously technical names, even words such as ‘facility’ and ‘amenity’ are liable to ambiguity and misinterpretation. The Plain English Society (see Appendix 2) provides good advice on wording that is clear and understandable for most people.

9.7.4 Double questions

Double questions are a common reason for misunderstanding. A typical example is: “Was the customer service advisor friendly and helpful?” If the customer thought she was very friendly but not helpful, the question would be unanswerable. Nor is it actionable. If friendliness and helpfulness are both important to customers, it is necessary to ask two questions.

9.7.5 Biased questions

One of the biggest problems in the wording of questionnaires is the danger that the questionnaire itself will bias the response through unbalanced questions or rating scales11. Typical questions on a customer satisfaction survey might be: “How satisfied are you with the layout of the store?”
“How satisfied are you with the speed of response for on-site technical support?” Each of those questions has introduced an element of bias which is likely to skew the results, and the problem arises in the first part of the question: “How satisfied are you with…?” The question itself is suggesting that customers are satisfied; it is just a matter of how satisfied. To eliminate that bias and be certain that the survey is providing a measure that accurately reflects how satisfied or dissatisfied customers feel, those questions should be worded as follows: “How satisfied or dissatisfied are you with the layout of the store?” “How satisfied or dissatisfied are you with the speed of response for on-site technical support?”

9.7.6 Biased rating scales

The other part of the question that might bias the response is the rating scale. Biased rating scales are commonly found on many customer satisfaction questionnaires, as shown in Figure 9.6.

FIGURE 9.6 A positively biased rating scale

Please comment on the quality of service you received by ticking one box on each line:

Response options: Excellent / Good / Average / Poor
Attributes rated: Helpfulness of staff, Friendliness of staff, Cleanliness of the restaurant, Cleanliness of the toilets, Waiting time for your table
For an accurate measure, customers must be given as many chances to be dissatisfied as to be satisfied. The scale shown is not balanced and is likely to bias the result towards satisfaction. Most positively biased rating scales on customer satisfaction questionnaires are probably there because the questionnaire designers are unaware of the problem. However, some companies that are very experienced in CSM deliberately use positively biased questionnaires on the grounds that only ‘top box’ satisfaction matters, so it is only degrees of satisfaction that are worth measuring. There are two problems with this philosophy. Firstly, even if most customers are somewhere in the very satisfied zone, it is still essential to understand
just how dissatisfied the least satisfied customers are and the extent to which individual attributes are causing the problem. In many ways it is more valuable to the organisation to identify in detail the problem areas that it can fix than to have detailed information on how satisfied its most satisfied customers are. The second argument against using positively biased rating scales is that they are not necessary. With a sufficient number of points on the scale one can accommodate degrees of satisfaction and dissatisfaction in equal proportions. As we saw in the previous chapter, a 10-point scale allows five options for degrees of satisfaction whilst still offering the same number of choices for customers who are less than satisfied.

KEY POINT
Balanced questions and rating scales will offer customers equal opportunities to be dissatisfied or to be satisfied, so will not bias the outcome.

9.7.7 Requirement wording

If the wording of the questionnaire is not to influence customers’ answers, the list of customer requirements that are scored for satisfaction and importance must be neutrally worded. Examples of wording that break this rule would be:

“How satisfied or dissatisfied were you with…
Quick service at the checkout
An efficient check-in procedure
A warm atmosphere in the restaurant.”

Loaded statements like those above are more likely to depress rather than increase satisfaction scores because they are effectively asking the customer to rate the supplier against high standards. They will seriously inflate importance scores since they lead customers to focus on the adjectives. Of course it is important that the service is speedy rather than slow and that check-in is efficient as opposed to inefficient. For an accurate measure of satisfaction, it is essential that the wording of the attributes does not put any thoughts into respondents’ heads other than labelling, in a neutral fashion, each customer requirement to be scored.
The requirements listed above should therefore be worded:

“How satisfied or dissatisfied were you with…
The speed of service at the checkout
The check-in procedure
The atmosphere in the restaurant.”

As we pointed out in Chapter 8, this is also a problem when Likert (agree–disagree)
scales are used. Due to organisations’ reluctance to use the scales in the right way, with as many strongly negative as strongly positive statements, satisfaction surveys comprising a list of 20 positive statements suffer from a high degree of acquiescence bias.

KEY POINT
The list of customer requirements should be worded in a neutral manner.
9.8 Closing the questionnaire

Whether for interviews or self-completion, there are several things to consider between the final question and the end of the questionnaire. After the last question it is courteous to thank respondents for their time and help, then give them a final opportunity to make any other points they want to make about anything. On a self-completion questionnaire this purpose will be served by the comments box, which is normally placed at the end of the questionnaire. We have already covered the importance of anonymity, but said that respondents can be given the choice to forgo their anonymity and be attributed. If it is offered, this option should always be given at the end of the questionnaire. For self-completion questionnaires it is useful to prominently remind customers about the return date, even though it should already have been specified in the introductory letter and at the beginning of the questionnaire. As well as reminding them that there is a reply-paid envelope to return it in, it is also a good idea to state the return address in case the reply envelope has been mislaid. Finally, it is good practice for agencies to offer the telephone number of the Market Research Society and/or the commissioning company for respondents to use if they wish to check the authenticity of the agency or to make any complaint about the interview or questionnaire12.
9.9 Piloting

It is normal practice to pilot questionnaires and most textbooks will make reference to this. However, as we have previously stated, CSM is not like most market research, and many of the distinctive aspects of CSM reduce the need for piloting. Firstly, unlike most research, CSM is preceded by an extensive exploratory research phase whose sole purpose is to ensure that the questionnaire asks the right questions. The core of the questionnaire will be the 15–20 customer requirements that are scored for importance and satisfaction. These are determined by the customers during the exploratory research phase. The wording of the requirements will also be predetermined, and will be based on words used by customers during the exploratory research rather than any terminology used by the organisation. There is not much room for additional questions; of those, there is a limited number of tried and tested options for any loyalty questions, and the wording of classification questions is often standard. For subsequent tracking studies it will be important not to tamper with the wording of the original questionnaire, to ensure comparability. Even the
more peripheral aspects of the survey, such as the introductory letter and the introduction and close of the questionnaire, have been tried and tested so many times by experienced CSM practitioners that there isn’t really anything left to pilot. Unless the piloting was even more extensive than the exploratory research, how could a small-scale pilot justify changing anything determined by the exploratory research? Other aspects of the methodology, such as scoring importance and satisfaction on a 10-point scale, are so fundamental that no purpose would be served by piloting. Consequently, it is not necessary to pilot a CSM questionnaire that adheres strictly to the methodology explained in this book.

Questionnaire piloting would be necessary if the methodology has not been followed. If exploratory research has not been conducted, some of the questions may be irrelevant to customers. Others may be misleading or even incomprehensible. The questionnaire might be too long. If exploratory research has been done and the methodology has been scrupulously followed, the only remaining unknown is the type of people that make up the sample. This is most applicable to telephone interviews, since some people, e.g. senior management, can be very difficult to reach. In this situation it is not so much the questionnaire being piloted as the difficulty of achieving the required number of interviews. A similar example is the organisation having a poor database. Again, it is not the questionnaire but the feasibility of achieving a sufficiently large and representative sample that needs to be tested. The main reason for questionnaire piloting will be if many of the customers in the sample may struggle to fully understand it, typically through old age, language difficulties or low educational attainment. This would apply to any questionnaire, but especially if a self-completion survey is envisaged.
Where there are major understanding difficulties, self-completion will often be impossible, with only very carefully guided face-to-face interviews standing any chance of a reliable response.
Conclusions

1. Most of the content, wording and sequencing of a CSM questionnaire are predetermined by the exploratory research and by non-negotiable aspects of the CSM methodology, such as scoring customers’ most important requirements for satisfaction and importance on a 10-point scale.
2. Satisfaction should be scored first, then importance, with any additional questions asked next and classification questions last.
3. Since scoring 15 to 20 customer requirements for both importance and satisfaction on a 10-point scale is very quick, a CSM questionnaire can accommodate up to 50 questions and still be administered, by interview or self-completion, in around 10 minutes.
4. The most common use of additional questions is to measure loyalty and/or to ask specific ‘lens of the organisation’ questions that will help to hone satisfaction improvement initiatives.
5. When wording questionnaires it is important to avoid injecting bias, to avoid any questions with vague or double meanings and to ensure that satisfaction scores are given only by respondents with recent experience of the supplier’s performance.
References

1. Johnson and Gustafsson (2000) "Improving Customer Satisfaction, Loyalty and Profit: An Integrated Measurement and Management System", Jossey-Bass, San Francisco, California
2. Parasuraman, Berry and Zeithaml (1985) "A conceptual model of service quality and its implications for future research", Journal of Marketing 49(4)
3. Parasuraman, Berry and Zeithaml (1988) "SERVQUAL: a multiple-item scale for measuring perceptions of service quality", Journal of Retailing 64(1)
4. Zeithaml, Berry and Parasuraman (1990) "Delivering Quality Service", Free Press, New York
5. McGivern, Yvonne (2003) "The Practice of Market and Social Research", Prentice Hall / Financial Times, London
6. Oppenheim, A N (1992) "Questionnaire Design, Interviewing and Attitude Measurement", Pinter Publishers, London
7. Converse and Presser (1988) "Survey Questions", Sage, London
8. Which? Online (1996) "Mobile Phone", Consumers' Association, in Barwise and Meehan (2004) "Simply Better: Winning and keeping customers by delivering what matters most", Harvard Business School Press, Boston
9. Sudman and Bradburn (1983) "Asking Questions", Jossey-Bass, San Francisco
10. Office for National Statistics website, www.statistics.gov.uk
11. Dillon, Madden and Firtle (1994) "Marketing Research in a Marketing Environment", Richard D Irwin Inc, Burr Ridge, Illinois
12. Market Research Society website, www.mrs.org.uk
CHAPTER TEN
Basic analysis

Having designed a questionnaire and undertaken a survey, the data collected will have to be analysed. This chapter will focus on analysing the core information collected by a customer satisfaction survey – measures of importance and satisfaction – before moving on in Chapter 11 to use those measures to calculate a trackable customer satisfaction index. In Chapter 12 we will explain how to extract actionable outcomes from the large volume of data that will often be generated by a customer satisfaction survey.
At a glance

In this chapter we will:
a) Describe different types of average.
b) Examine different ways of understanding what’s important to customers.
c) Show how to use importance and impact measures to distinguish givens from differentiators.
d) Explain how the standard deviation is used to measure the variance of views expressed by customers.
e) Consider how to identify and understand the dissatisfaction drivers.
f) Explain how to identify loyalty differentiators.
g) Describe the analysis of verbal scales.
h) Review analytical software for research data.
10.1 Averages

There are three measures of the average of a set of numbers – the mean, the median and the mode.

10.1.1 The mean

Usually, when people use the generic term ‘average’, they are referring to the mean. This is the sum of the values divided by the number of values. For example: 6, 14, 1, 3, 11, 4, 5, 9. The total is 53. There are 8 values, so 53/8 = 6.625, which is the mean average of the 8 scores.
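The worked example above can be reproduced with Python’s standard statistics module; this is simply an illustration of the arithmetic, not part of the CSM methodology.

```python
from statistics import mean

# The eight example scores from the text.
scores = [6, 14, 1, 3, 11, 4, 5, 9]

print(sum(scores))   # 53
print(mean(scores))  # 6.625
```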
10.1.2 The mode

The mode is the most commonly occurring value in a string of values. Due to central tendency (the fact that in the real world, most values are close to the average rather than in the extremes of the distribution), the mode will often be a good approximation of the average. For example, if we checked the shoe sizes of a random sample of adult males we might produce the following data: 10, 9, 6, 9, 8, 12, 9, 7, 8, 11. The mode is 9, which is probably a good indication of the most common shoe size amongst adult males and is close to the mean shoe size, which in this example would be 8.9. However, there are two problems with the mode. For some types of data the mode may not be at all reflective of what most people would see as the average. If we checked the rainfall data at a holiday resort we might see the following (in millimetres): 0, 0, 5, 0, 24, 13, 3, 0, 0, 0, 0, 0, 4, 7. The mode is clearly 0, but most people would see the mean of 4mm as a better reflection of the ‘average’ rainfall at that time of year. Of course, it might also be useful to know that there were 8 days without rain and only 6 days when it did rain, but that is simply a count of the values and nothing to do with the average. The other big problem with the mode for CSM is that if the raw data is in whole numbers, the mode will always be a whole number, making it far too insensitive to reflect the gradual changes in customer satisfaction that typically occur.

10.1.3 The median

The median is the middle value in a string of numbers. Sorted in descending order, the median of the following string of 11 values is 7: 10, 9, 8, 8, 7, 7, 6, 6, 4, 3, 3. If there is an even number of values, as shown in the following example, the median is the mid-point between the two middle values, 6.5 in this case: 9, 8, 8, 7, 7, 6, 6, 4, 3, 3. The median can be a very useful measure of the average in situations where the range of data is very wide and the sample small.
The example below shows, in ascending order, the value of 7 houses in a very small village. £220,000, £235,000, £260,000, £265,000, £272,000, £310,000, £895,000 The mean is £351,000 but has been heavily influenced by the one very high and untypical value. In this example, the median of £265,000 would be a better reflection of the average house value in the village. This problem will not occur with CSM data on a 10-point scale with a reasonable sample size. Like the mode, the median also suffers from lack of sensitivity as far as CSM data is concerned. Hence the use of the mean for calculating average importance and satisfaction scores. KEY POINT The mean average is used for CSM data.
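The three averages can be verified with Python's standard `statistics` module, using the shoe-size, rainfall and house-price data above:

```python
import statistics

shoe_sizes = [10, 9, 6, 9, 8, 12, 9, 7, 8, 11]
statistics.mode(shoe_sizes)    # 9 - the most common value
statistics.mean(shoe_sizes)    # 8.9 - close to the mode here

rainfall_mm = [0, 0, 5, 0, 24, 13, 3, 0, 0, 0, 0, 0, 4, 7]
statistics.mode(rainfall_mm)   # 0 - misleading as an 'average'
statistics.mean(rainfall_mm)   # 4 - a better reflection

house_prices = [220000, 235000, 260000, 265000, 272000, 310000, 895000]
statistics.mean(house_prices)    # 351000 - dragged up by one outlier
statistics.median(house_prices)  # 265000 - more typical of the village
```

The same calculations can of course be done in Excel with AVERAGE, MODE and MEDIAN.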
10.2 Understanding what's important to customers
In Chapter 4 we examined the difference between stated and derived measures of importance and concluded that so-called 'derived importance' measures are actually measures of impact rather than measures of what's important to customers. Therefore, any debates about whether stated or derived importance is the better measure are irrelevant. Neither is better or worse since they are measures of different things – importance and impact. Organisations that want a full understanding of how customers judge them will use both measures.
10.2.1 Importance
A CSM questionnaire asks customers to score the importance of a list of customer requirements on a 10 point scale, where 1 means 'of no importance at all' and 10 means 'extremely important'. Based on a sample size of at least 200 respondents, the mean importance scores generated by this exercise will provide a very clear and reliable view of the relative importance of customers' priorities, as seen by the customers themselves. In this chapter we will use some fictitious data from a retailer to illustrate the outcomes of a customer satisfaction survey. For simplicity, the charts show only eight customer requirements. As stated earlier in this book, a typical survey would measure the top 15 to 20 customer requirements (as determined by exploratory research with customers). In the retail example shown, all eight requirements are important but choice of products, staff expertise and prices are the customers' top priorities. These are significantly more important than store layout, staff helpfulness and staff appearance.
FIGURE 10.1 Importance [mean importance scores on a scale from 6.5 to 10 for: choice of products, expertise of staff, price level, speed of service, quality of products, layout of store, staff helpfulness, staff appearance]
10.2.2 Impact
As we know from Chapter 4, so-called 'derived importance' is actually a measure of impact, essentially highlighting things that are 'top of mind' for customers. Technically it is a measure of the extent to which a particular factor is currently influencing customers' judgement of an organisation. We also saw in Chapter 4 that due to the high degree of collinearity in customer satisfaction data, a bivariate correlation provides a better reflection of relative impact than multiple regression. The correlation coefficient will be a value between 0 and 1 and a typical range for CSM data is shown in Figure 10.2. It shows how some requirements, staff helpfulness in this example, can make a big impact on customers' overall judgement of a supplier even though customers don't score them particularly highly for importance. Conversely there can be requirements that are almost always scored highly for stated importance, price being a typical example, that sometimes make little difference to customers' overall judgement of the supplier.
FIGURE 10.2 Impact scores [correlation coefficients from 0.00 to 1.00 for: choice of products, expertise of staff, price level, speed of service, quality of products, layout of store, staff helpfulness, staff appearance]
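The bivariate correlation used as the impact measure can be sketched in plain Python. The `pearson` helper and the respondent scores below are illustrative inventions (Python 3.10+ users could use `statistics.correlation` instead):

```python
from math import sqrt

def pearson(x, y):
    """Bivariate (Pearson) correlation coefficient between two score lists."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sqrt(sum((a - mx) ** 2 for a in x))
    sy = sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

# Invented data: each respondent's satisfaction with one requirement
# alongside their overall satisfaction score (both on a 10-point scale).
staff_helpfulness = [9, 4, 8, 3, 7, 9, 5, 8]
overall = [9, 5, 8, 4, 7, 9, 5, 9]

impact = pearson(staff_helpfulness, overall)  # close to 1 = high impact
```

A coefficient near 1 (as here) marks a requirement whose scores move closely with customers' overall judgement, i.e. a high-impact requirement.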
10.3 Using importance and impact measures To gain a full understanding of what’s important to customers, importance and impact should be combined in the type of matrix shown in Figure 10.3. The importance scores are plotted on the y axis and the impact scores on the x axis, with the range of scores determining the axis scale in both cases. The key area is the top right hand box, containing requirements with the highest scores for importance and impact. These are the Satisfaction Drivers – the requirements that will have the most influence on customer satisfaction. In the example shown, the retailer should
obviously focus very strongly on expertise of staff and speed of service. Since in most CSM surveys the top 15-20 customer requirements will be covered, there will often be a greater number of Satisfaction Drivers than the two shown in the example here.
FIGURE 10.3 Satisfaction drivers [matrix plotting stated importance (y axis, low to high) against derived importance/impact (x axis, low to high). Top left, HYGIENE FACTORS: choice of products, price level. Top right, SATISFACTION DRIVERS: expertise of staff, speed of service. Bottom left, MARGINALS: layout of store, staff appearance. Bottom right, HIDDEN OPPORTUNITIES: quality of products, staff helpfulness.]
KEY POINT Requirements that score highly for importance and impact are Satisfaction Drivers and will make a big difference to customer satisfaction. The top left hand box contains the Givens – requirements that customers say are very important but make relatively low impact on their judgement of the supplier. Provided performance is maintained at an acceptable level, Givens would not normally be areas for investment. However, it is absolutely essential to maintain an acceptable level of performance since customers will punish suppliers very heavily if their expectations on Givens are not met – empty shelves in the supermarket or dirty tableware in a restaurant for example. KEY POINT High importance and low impact implies Givens – requirements that will not make much impact on customers provided an adequate level of performance is maintained. The bottom right hand box shows the Hidden Opportunities. These are requirements that customers don’t rate as highly important, yet strongly influence customers’ judgement of the supplier. It’s not uncommon to find staff helpfulness in this cell
since a particularly good or a poor experience with a member of staff will be remembered by customers for a long time and will probably stimulate considerable word of mouth – positive or negative. Hidden Opportunities will often provide a good return on investment for suppliers since organisations can never give customers too many good experiences in areas that have high impact. The requirements in the bottom left cell score relatively low for both importance and impact. However, it would be misleading to see these factors as unimportant requirements that can be more or less ignored since, if exploratory research was conducted, all the requirements measured by the survey will be important to customers – it’s just a matter of degree. It’s best to view these requirements as second division givens. They don’t usually need much investment, but expected levels of reasonable performance must be maintained.
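The quadrant logic of the matrix can be sketched as below. The cut-points (mid-point of each axis range) and all the scores are illustrative assumptions, not values from the book:

```python
def classify(requirements):
    """Place each requirement in a quadrant of the importance/impact matrix."""
    imps = [imp for imp, _ in requirements.values()]
    impacts = [im for _, im in requirements.values()]
    imp_cut = (max(imps) + min(imps)) / 2        # midpoint of importance range
    impact_cut = (max(impacts) + min(impacts)) / 2
    quadrants = {}
    for name, (importance, impact) in requirements.items():
        if importance >= imp_cut:
            quadrants[name] = "Satisfaction Driver" if impact >= impact_cut else "Given"
        else:
            quadrants[name] = "Hidden Opportunity" if impact >= impact_cut else "Marginal"
    return quadrants

scores = {  # requirement: (mean importance, impact correlation) - illustrative
    "Expertise of staff": (9.4, 0.75),
    "Price level": (9.2, 0.30),
    "Staff helpfulness": (7.8, 0.80),
    "Staff appearance": (7.2, 0.25),
}
quadrants = classify(scores)
```

With these invented scores, expertise of staff lands in the Satisfaction Drivers cell, price level among the Givens, staff helpfulness among the Hidden Opportunities and staff appearance among the Marginals, mirroring the retail example.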
10.4 Understanding customer satisfaction
As well as scoring the list of requirements for importance, customers also score the same list for satisfaction. Figure 10.4 shows the average satisfaction scores for the retailer. They are still listed in order of importance to the customer, a practice that should be consistently followed on all charts.
FIGURE 10.4 Satisfaction [mean satisfaction scores on a scale from 6.5 to 10 for: choice of products, expertise of staff, price level, speed of service, quality of products, layout of store, staff helpfulness, staff appearance]
Average satisfaction scores above nine on a ten point scale show an extremely high level of customer satisfaction. Scores of eight equate to ‘satisfied’ customers, seven to ‘quite satisfied’ and six (which is only just above the mid point of 5.5) to ‘borderline’ or ‘much room for improvement’. Increasingly, successful companies are striving for ‘top box’ scores, (9s and 10s on a 10-point scale), as work by Harvard and others has demonstrated that much stronger levels of loyalty are found amongst highly satisfied
customers than amongst merely satisfied ones. Average satisfaction scores of five or lower are below the mid point and suggest a considerable number of dissatisfied customers. It would be good practice in telephone surveys to probe any scores below 6 out of 10 to find out why the low score was given. This will enable the research to explain any poor satisfaction scores such as speed of service in Figure 10.4. Self-completion questionnaires can ask for comments for low satisfaction scores, but not all customers will give them, and they will typically not provide as much insight as comments generated by probing low scores in interviews.
10.5 The range of views
10.5.1 Range and variance
Technically, the range is the difference between the highest and the lowest value in a set of data. Thus, if the highest satisfaction score is 10 and the lowest is 1, the range is 9. However, for CSM that isn't very useful because the fact that the range is wide doesn't tell us anything about the extent of consensus or variance in the views expressed by customers. The histograms shown in Figures 10.5 to 10.7 illustrate the point. (A histogram shows how many people have scored each point on the scale.) In all three cases, 20 people have taken part in a survey, scoring their level of satisfaction on a 10 point scale. In all three cases, the average score comes out at 5.5, but each paints a completely different picture of the supplier's success in satisfying its customers. In Figure 10.5, there is a strong consensus of opinion, with all 20 respondents giving very close scores of either 5 or 6 out of 10. In other words, the service is neither good nor bad, it is mediocre, and all customers think the same way.
FIGURE 10.5 Histogram 1 [number of respondents giving each satisfaction score from 1 to 10]
FIGURE 10.6 Histogram 2 [number of respondents giving each satisfaction score from 1 to 10]
In Figure 10.6 the 20 people surveyed are divided into two equal groups that hold diametrically opposed views. Half think the service is excellent – as good as it could be. The other ten customers have a very low opinion of the service – rating it as poor as it could be. This paints a very different picture to the one shown in Histogram 1, but the mean satisfaction score is still 5.5. Finally, Histogram 3 shows a different picture again, with customers' views equally spread across the full spectrum of opinion. Once more the average satisfaction score is 5.5.
FIGURE 10.7 Histogram 3 [number of respondents giving each satisfaction score from 1 to 10]
With an average score potentially disguising such widely differing realities, it is clearly necessary to understand what lies behind the average importance and satisfaction
scores. One way would be to have a histogram for each importance and satisfaction score, but with 20 customer requirements on a typical CSM survey that would necessitate 40 histograms. Moreover, since the real world does not produce such extreme differences in the spread of scores as those shown in Figures 10.5 to 10.7, it would be very difficult to distinguish the differences between the histograms. Much more useful would be a measure that clearly identifies the extent to which the customers surveyed agree with each other or hold widely differing views. That measure is called the standard deviation, which effectively shows the average distance of all the scores from the overall average.
10.5.2 The standard deviation
In order to describe a variable in the sample properly we need to use a measure of central tendency (usually the mean) and a measure of the amount of variance in the scores. The mean score is a measure of the centre of the sample. In order to know whether the mean is a good representation of the underlying sample scores we need to know whether most people fall relatively close to the mean, or whether they are widely distributed. The most useful measure of variance with interval data is the standard deviation, a measure of the average distance between each score and the mean score. The obvious way to work this out would be to calculate the straightforward average distance between each score and the mean, by adding up the distances and dividing by the number of cases. Unfortunately this would always equal zero, as the distances above and below the mean would cancel each other out. Known as the average deviation, its formula is shown below, where X is each score and n is the number of scores in the sample.
average deviation = Σ(X − mean) / n
One solution to this problem is to square the distance between each score and the mean before dividing by the number of cases.
Squaring the negative distances will remove the negative sign and produce the average squared distance between each score and the mean, known as the variance. As you can see, the only difference between the two formulae is the squaring of the distance between each score and the overall mean.
variance = Σ(X − mean)² / n
The problem with the variance as a measure of dispersion is that the numbers are magnified due to the squaring, so the result is difficult to relate back to the original scale and consequently hard to interpret. The easy solution is to calculate the square root of the variance. This is the standard deviation and its formula is shown below.
sample standard deviation = √( Σ(X − mean)² / (n − 1) )
In the final formula, the sample size has suddenly become n − 1. So for a sample of 200,
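Translated into code (with illustrative scores), the sample standard deviation with the n − 1 denominator matches what Excel's STDEV and Python's `statistics.stdev` produce:

```python
import statistics
from math import sqrt

def sample_std_dev(scores):
    """Standard deviation with the n - 1 denominator, as in the formula above."""
    n = len(scores)
    mean = sum(scores) / n
    squared_distances = sum((x - mean) ** 2 for x in scores)
    return sqrt(squared_distances / (n - 1))

satisfaction = [8, 9, 7, 8, 10, 6, 9, 8]  # illustrative 10-point-scale scores
sd = sample_std_dev(satisfaction)          # agrees with statistics.stdev
```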
199 would be used in the formula. Subtracting 1 from the sample size forces the standard deviation to be larger, and this simply reflects the good scientific principle that if there is a risk that the estimate of the population (as opposed to the sample) standard deviation might be wrong, one should err on the side of caution. In practice, with even quite small samples such as 50, this procedure makes virtually no difference to the standard deviations typically recorded for CSM data on a 10-point scale. Using the 'STDEV' formula, it is easy to calculate in Excel or other analysis software. The standard deviations produced by CSM data typically fall in the kind of range shown in Figure 10.8. On a 10 point scale, a standard deviation of around 1 indicates that there is a strong consensus of opinion. For example, customers are very satisfied with choice of products and most customers feel that way. On the other hand a standard deviation above 2 demonstrates a wide disparity of views, and we can see this with staff helpfulness. The way to use the standard deviation is to ignore it if it is below 2 but to investigate matters further in cases where it exceeds 2.
FIGURE 10.8 Standard deviations
Choice of products: satisfaction score 9.2, standard deviation 0.91
Expertise of staff: satisfaction score 7.9, standard deviation 1.53
Price: satisfaction score 8.8, standard deviation 1.39
Speed of service: satisfaction score 7.4, standard deviation 0.92
Quality of products: satisfaction score 7.7, standard deviation 1.34
Layout of store: satisfaction score 8.6, standard deviation 1.49
Staff helpfulness: satisfaction score 7.5, standard deviation 2.73
Staff appearance: satisfaction score 8.5, standard deviation 1.06
Based on the standard deviation for staff helpfulness, quite a lot of respondents must have scored 8s, 9s and 10s for satisfaction whereas others must have scored it very low. Even if only 10 to 15% of customers were very dissatisfied with something, an organisation needs to understand the problem in order to address it. There are two ways of doing this. Firstly, if respondents have been probed for any areas of dissatisfaction the comments will indicate which aspects of staff helpfulness customers don’t like. Perhaps for the retailer the comments will suggest that the main cause of customer dissatisfaction is not the response of staff when asked to help but the difficulty of finding one to ask. A second option is to examine the classification data of all the very dissatisfied respondents (those scoring 1 to 3) to see if there are any patterns. Perhaps for the retailer it might show that they are primarily older customers, over 70 years of age. Another piece can be added to the jigsaw by studying the importance scores too. In this example there might be a high standard deviation for importance, with elderly customers placing much more importance on staff helpfulness than most customers. Since satisfaction is a relative concept (based on the extent to which the supplier meets the customer’s requirements), the drilling down
would indicate that a consistent level of staff helpfulness, which met the requirements of most customers, was not sufficient to meet the needs of the elderly, who expected much more help. This is a very useful finding since it would enable the retailer to develop some focused actions on staff helpfulness targeted on elderly customers. This would be much better than basing decisions on the average score and taking the inappropriate step of exhorting all staff to be more helpful across the board when they are already doing all that can be reasonably expected to help customers. KEY POINT Use standard deviations to identify pockets of dissatisfied customers and comments to plan targeted actions.
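The drill-down described above is simple filtering once respondent-level data is to hand. The records below are invented to mirror the retailer example (each respondent's satisfaction score for staff helpfulness plus one piece of classification data, age):

```python
# Hypothetical respondent records: (satisfaction with staff helpfulness, age)
respondents = [
    (9, 45), (8, 31), (2, 74), (9, 52), (1, 78),
    (8, 29), (3, 81), (9, 38), (2, 72), (8, 60),
]

# Isolate the very dissatisfied (scores 1 to 3) and inspect their profile.
very_dissatisfied = [age for score, age in respondents if score <= 3]
share = len(very_dissatisfied) / len(respondents)          # 0.4 of the sample
over_70 = sum(1 for age in very_dissatisfied if age > 70)  # all 4 of them
```

In this invented sample every very dissatisfied respondent is over 70, which is exactly the kind of pattern that would point the retailer towards targeted action rather than a blanket exhortation to all staff.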
10.6 Dissatisfaction drivers
In addition to monitoring levels of customer satisfaction, it is very useful to understand whether any aspects of the service are strongly irritating customers, or a segment of customers. These are the dissatisfaction drivers. They can be identified by examining the percentage of customers that have given very low scores for each requirement. The threshold for low scores should be determined by the organisation's level of success in satisfying customers. A company with good levels of customer satisfaction (an index of 80% or higher) will benefit from highlighting any areas of dissatisfaction, so the threshold would be scores in the bottom half of the scale – 1 to 5 on a 10-point scale. For a less successful company with an index of 70% or below, it would be more appropriate to focus on areas of severe dissatisfaction, identified by scores of 1 to 3 on a 10-point scale. Dissatisfaction drivers can be highlighted by a chart like the one shown in Figure 10.9.
FIGURE 10.9 Dissatisfaction drivers [percentage of customers, from 0% to 60%, giving low scores for: choice of products, expertise of staff, price level, speed of service, quality of products, layout of store, staff helpfulness, staff appearance]
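Counting the share of low scorers per requirement is then a one-liner, using whichever threshold fits the company (1–5 for a high-scoring organisation, 1–3 otherwise). The raw scores below are invented for illustration:

```python
def dissatisfied_pct(scores, threshold):
    """Percentage of respondents scoring at or below the threshold."""
    return 100 * sum(1 for s in scores if s <= threshold) / len(scores)

# Illustrative raw scores for two requirements (10-point scale)
speed_of_service = [4, 7, 8, 3, 9, 5, 8, 2, 7, 5]
staff_appearance = [8, 9, 8, 7, 9, 10, 8, 9, 7, 8]

speed_pct = dissatisfied_pct(speed_of_service, 5)      # 50.0 - a driver
appearance_pct = dissatisfied_pct(staff_appearance, 5)  # 0.0 - not a problem
```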
If customers are interviewed, all satisfaction scores below the threshold would be probed. This will ensure that as well as identifying what customers are dissatisfied with, the organisation will also understand why they are dissatisfied. The insight gained from customer comments will be extremely useful when deciding how to improve customer satisfaction. Figure 10.10 illustrates this point for the retailer. If the company was basing decisions on the scores alone, it may be tempted to encourage its staff to be more friendly, polite and/or helpful with customers and perhaps to provide training in customer contact skills or product knowledge. The insight gained from the comments demonstrates that these actions would not be cost-effective in improving customer satisfaction with staff helpfulness, since it is staff availability that is clearly the problem rather than their response when a customer does eventually find someone to ask.
FIGURE 10.10 Problems with staff helpfulness [percentage of comments, from 0% to 30%, citing each cause: couldn't find anyone to help; staff over-worked; staff in too much hurry; not interested in solving my problem; didn't have knowledge to help; offhand/rude]
10.7 Loyalty differentiators
As well as knowing what makes customers satisfied or dissatisfied, many companies will also want to understand what makes them loyal or disloyal. To do this it is necessary to ask one or more loyalty questions, possibly combining them into a loyalty index (see Chapter 11 for details). The loyalty data should then be used to divide respondents into three loyalty groups:
· Loyal – scoring 8-10 on the loyalty question(s)
· Ambivalent – scoring 4-7 on the loyalty question(s)
· Not loyal – scoring 1-3 on the loyalty question(s)
To highlight the differences between the most loyal and least loyal customers it is more productive to discard the middle group and contrast the satisfaction scores given by the loyal respondents and the disloyal ones. The resultant chart, Figure 10.11, shows that some requirements, such as 'staff appearance' and 'store layout', make virtually no difference to customer loyalty – the least loyal customers scoring them almost as highly as the most loyal. By contrast, 'quality and choice of product' as well as 'staff helpfulness'
highlight why the retailer's most loyal customers like the company so much more than its least loyal ones, making them the main loyalty differentiators.
FIGURE 10.11 Loyalty differentiators [satisfaction scores from 5 to 10 for each requirement (choice of products, expertise of staff, price level, speed of service, quality of products, layout of store, staff helpfulness, staff appearance), contrasting the most loyal with the least loyal customers]
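The grouping and contrast described above can be sketched as follows, using the 8–10 / 4–7 / 1–3 bands from the text and invented (loyalty, satisfaction) pairs:

```python
def loyalty_group(score):
    """Band a loyalty score into the three groups used in the text."""
    if score >= 8:
        return "loyal"
    if score >= 4:
        return "ambivalent"
    return "not loyal"

# Hypothetical (loyalty score, satisfaction with one requirement) pairs
data = [(9, 9), (10, 8), (8, 9), (6, 7), (5, 6), (2, 4), (1, 3), (3, 5)]

def mean_satisfaction(group):
    scores = [sat for loy, sat in data if loyalty_group(loy) == group]
    return sum(scores) / len(scores)

# Discard the ambivalent middle and contrast the two extremes.
gap = mean_satisfaction("loyal") - mean_satisfaction("not loyal")
```

A large gap between the two groups (as with this invented requirement) marks it as a loyalty differentiator; a gap near zero puts it with 'staff appearance' and 'store layout' in Figure 10.11.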
10.8 Analysing verbal scales
As explained in the previous chapter, verbal scales are not recommended for customer satisfaction research, but if they are used, the results have to be analysed using a frequency distribution – in other words, a count of how many people said what. Using a retail example with different questions, a frequency distribution is shown in Figure 10.12. The numbers are usually percentages, so in the example shown, 14% are completely satisfied with the location and 78.9% are quite satisfied. It is a totally accurate summary of the results, but it does not make a very strong impression.
FIGURE 10.12 Frequency distribution
(Completely satisfied | Quite satisfied | Quite dissatisfied | Completely dissatisfied)
Location: 14.0% | 78.9% | 7.0% | 0.0%
Range of merchandise: 9.1% | 56.4% | 29.1% | 5.5%
Price level: 17.3% | 55.8% | 26.9% | 0.0%
Quality of merchandise: 16.4% | 63.6% | 20.0% | 0.0%
Checkout time: 4.2% | 70.8% | 22.9% | 2.1%
Staff helpfulness: 5.6% | 64.8% | 25.9% | 3.7%
Parking: 33.3% | 54.4% | 10.5% | 1.8%
Staff appearance: 28.3% | 64.2% | 7.5% | 0.0%
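A frequency distribution like Figure 10.12 is a simple count-and-percentage exercise. The raw verbal responses below are invented (57 of them, chosen so that the percentages reproduce the Location row):

```python
from collections import Counter

CATEGORIES = ["Completely satisfied", "Quite satisfied",
              "Quite dissatisfied", "Completely dissatisfied"]

def frequency_distribution(responses):
    """Percentage of respondents choosing each verbal category."""
    counts = Counter(responses)
    n = len(responses)
    return {cat: round(100 * counts[cat] / n, 1) for cat in CATEGORIES}

location = (["Completely satisfied"] * 8 + ["Quite satisfied"] * 45 +
            ["Quite dissatisfied"] * 4)   # invented raw responses

dist = frequency_distribution(location)
# {'Completely satisfied': 14.0, 'Quite satisfied': 78.9,
#  'Quite dissatisfied': 7.0, 'Completely dissatisfied': 0.0}
```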
It is possible to chart a frequency distribution showing varying levels of satisfaction or importance by attribute. This is shown in Figure 10.13, and is certainly easier to assimilate than the table of numbers.
FIGURE 10.13 Charting verbal scales [stacked bar chart showing, for each attribute in Figure 10.12, the percentage of respondents in each category from completely dissatisfied to completely satisfied]
However, the real problem with the analysis produced from verbal scales is the absence of a single average score for each attribute. For example, it is not possible to make a direct comparison between the importance score for location and the satisfaction score for location, making it impossible to carry out a gap analysis to determine priorities for improvement (see Chapter 12). Nor can a weighted customer satisfaction index be calculated. Whilst it may be very tempting to change the categorical data produced by verbal scales into numbers for analysis purposes, this is statistically invalid. As we explained in Chapter 8, verbal scales produce nonparametric data which lack interval properties so cannot be analysed like numbers or changed into them. KEY POINT Compared with numerical data, output from verbal scales is far less useful for CSM but it is statistically invalid to change verbal categories into numbers to improve ease of analysis and clarity of reporting.
10.9 Clarity of reporting
If the results of a CSM survey are not clear and easy to understand, they will not be assimilated into the organisation's thinking and will not lead to effective action to improve customer satisfaction. This provides an additional reason for using a
numerical scale, the simple average scores giving a much clearer outcome than the plethora of information presented by a frequency distribution. Consistency is also very helpful, hence the consistent listing of the requirements in order of importance to the customer on all charts and tables. Use of colour will also help the clarity and consistency of the message, particularly if based on a simple and universally understood colour coding such as traffic lights. Even a frequency distribution will provide a much clearer picture if red and amber, representing danger, are used for low satisfaction scores and shades of green for higher ones. Whenever data is presented it is helpful to explain in simple terms how it was produced, otherwise it may not be believed and its apparent lack of transparency could be used by detractors to cast doubt on the credibility of the CSM process and outcomes. Statistically derived measures of impact are often poorly understood so their calculation and meaning must be clearly explained using the kind of illustrations presented in Chapter 4 (Figures 4.2 and 4.3). Sometimes a concept such as the standard deviation can be avoided altogether in reporting CSM results to colleagues by presenting the information in a less technical manner. Instead of stating that the satisfaction score for a requirement such as ‘staff helpfulness’ has a high standard deviation, it can be more useful to use the dissatisfaction drivers and say ‘x% of customers are very dissatisfied with helpfulness of staff ’. Certainly, the basis of any kind of composite measure, such as a customer satisfaction index will have to be explained if it is to have any credibility, and this will be the topic of the next chapter. 10.10 Software All the statistical procedures mentioned in this book can be conducted using Microsoft Excel. 
Since most people have Excel and are competent in its use, it is unlikely that the financial cost or training cost of adopting specialist software will be justifiable unless large quantities of research data are being handled. If specialist software is required there are two broad types available. The first will typically provide a general entry-level solution for many research tasks including questionnaire design, data entry and web surveys as well as data analysis. Most are easy to use but fairly limited in the level of statistical analysis provided. An example of this type of software is SNAP [1]. For those requiring a much more sophisticated level of statistical analysis, a specialist statistical package such as SPSS will be necessary [2]. This type of software would be much more difficult to learn and would be worthwhile only for those with a high level of statistical knowledge. Since specialist software can be very difficult for the layman to evaluate, help is available, if required, from independent market research software consultants Meaning Ltd [3].
Conclusions 1. Importance scores are based on what customers say is important and provide the only measure of the relative importance of customers’ requirements.
2. Statistically derived measures of impact are different and reflect the extent to which a requirement is influencing customers' judgement of an organisation.
3. For a full understanding of customer satisfaction, organisations should monitor importance and impact and combine the two sets of measures to distinguish between Givens and Satisfaction Drivers.
4. To have satisfied customers organisations must perform sufficiently well on the Givens.
5. To achieve very high levels of customer satisfaction, strong performance on the Satisfaction Drivers will also be essential.
6. Satisfaction scores of 8 equate to satisfied customers but 'top box' scores (9s and 10s) are required to generate loyalty.
7. Always probe low satisfaction scores to fully understand the reasons for any customer dissatisfaction.
8. Use standard deviations to identify pockets of dissatisfied customers and comments to plan targeted remedial action.
9. Highlight loyalty differentiators by contrasting the satisfaction scores given by the most loyal customers with those given by the least loyal.
10. Output from verbal scales is analytically far less useful than from numerical scales, but it is not statistically valid to assign numerical values to categorical or ordinal data.
References
1. www.snapsurveys.com
2. www.spss.co.uk
3. www.meaning.uk.com
CHAPTER ELEVEN
Monitoring performance over time
Most organisations require a headline measure that reflects their performance in satisfying customers. Quite rightly so, since it serves some very useful purposes. It enables senior management to have a top-line figure that demonstrates how the organisation is performing. It is essential for companies using a balanced scorecard, since customer satisfaction is usually one of its main components. It can be used for setting targets and for judging the organisation's success in achieving them. It is vital for benchmarking, whether internally across business units, stores, regions etc. or against external organisations. Although headline measures are very useful and widely used, there's still much confusion and misunderstanding over how they should be produced. The three most commonly used techniques are a simple overall satisfaction question, a composite index based on a number of components of satisfaction, or a weighted index based on the relative importance of its component elements. There are also other important outcomes of customer management, such as loyalty. Shouldn't that be monitored as well as, or instead of, customer satisfaction?
At a glance
In this chapter we will examine:
a) The problems inherent in a percentage satisfied measure
b) The benefits of an index
c) Weighted and unweighted indices
d) How to calculate a weighted customer satisfaction index
e) The reliability of indices
f) Constructing a loyalty index
g) Monitoring loyalty behaviour
11.1 Overall satisfaction The simplest way to get a measure of overall satisfaction is to ask the question:
“Taking everything into account, how satisfied or dissatisfied are you overall with XYZ?” The rating scale attached to this question could be verbal or numerical. If numerical, the headline measure would normally be an average score; if verbal, it would typically be a percentage satisfied measure based on aggregating the respondents ticking boxes in the top half of the scale (the top two boxes on a typical 5-point verbal scale). For all the reasons explained in Chapter 8, an overall satisfaction question with a verbal rating scale is by far the least useful headline measure. To re-cap, the key reasons are:
a) Since most organisations now have customers who are broadly satisfied rather than dissatisfied overall, this measure encompasses most customers, resulting in a very high score that often leads to a dangerous level of complacency within the organisation.
b) With only two scale points covering the entire satisfied zone (where most customers are), the 5-point scale is not sufficiently sensitive to detect the small changes in customer satisfaction that typically occur.
c) Moreover, the percentage satisfied measure fails to reflect most of the changes in satisfaction that it does detect, because its aggregation of data doesn't show changes across the two scale points in the satisfied zone, nor the three scale points below that level.
d) The financial benefits of customer satisfaction (continued loyalty and high customer lifetime value) occur mainly at high levels of satisfaction. Therefore, using a percentage satisfied measure for target setting and tracking means monitoring a measure that is not tough enough to produce any worthwhile benefits. The percentage satisfied measure fatally perpetuates the confusion between 'making more customers satisfied' and 'making customers more satisfied'.
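A small worked example of point c): two invented sets of 10-point scores with identical percentage-satisfied figures (here taken as scores in the top half of the scale) but clearly different means — the improvement is invisible to the aggregated measure:

```python
# Invented 10-point scores for the same 10 customers, before and after
# a genuine service improvement.
before = [7, 7, 6, 8, 7, 6, 7, 4, 7, 6]
after = [9, 9, 8, 10, 9, 8, 9, 4, 9, 8]

def pct_satisfied(scores):
    """Percentage scoring in the top half of the scale (6 and above)."""
    return 100 * sum(1 for s in scores if s >= 6) / len(scores)

def mean(scores):
    return sum(scores) / len(scores)

# pct_satisfied is 90.0 in both cases, yet the mean rises from 6.5 to 8.3:
# the customers have become more satisfied without more of them
# becoming 'satisfied'.
```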
11.2 The benefits of an index

11.2.1 Random measurement error
Even if it is based on a 10-point scale, the single-question measure is statistically by far the worst option due to a phenomenon that is variously labelled random, observation or measurement error. It was Galileo, as long ago as 1632, who first propounded that measurement errors are symmetrical1 (i.e. equally prone to under- or over-estimation). This enabled eighteenth-century scientists such as Thomas Simpson to demonstrate the advantage of using the mean compared with a single observation in astronomy2 – the instances of over- and under-estimation effectively cancelling each other out. As we explained in Chapter 6, measurement errors are now classified as ‘systematic’ and ‘random’, and it is the random measurement error that is minimised by using an index. This is illustrated in Figure 11.1, where the mid-point is the true satisfaction score that would have been obtained if there were no such
thing as random measurement error. Regardless of whether the score for each item was good or bad, the chart demonstrates that some of the requirements will have scored rather higher than they should have done, whilst others, due to random, inexplicable factors, will have scored somewhat lower. The net effect is that the over- and under-scoring is more or less cancelled out when a composite index is used.

FIGURE 11.1 Random measurement error
[Chart: random scoring error for Attributes 1–15, each deviating between roughly -0.5 and +0.5 from the true score]
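The cancellation effect shown in Figure 11.1 can be sketched as a small simulation (the true score, noise level and attribute count below are hypothetical):

```python
# Sketch: random measurement error largely cancels out in a composite index.
# Each observed attribute score = true underlying attitude + random noise;
# the 15-attribute mean sits much closer to the true score than a single item.
import random

random.seed(42)
TRUE_SCORE = 7.8   # the respondent's stable underlying attitude (hypothetical)
NOISE = 0.5        # momentary distortions: wording, context, mood of the moment

single_item_errors, index_errors = [], []
for _ in range(10_000):
    scores = [TRUE_SCORE + random.gauss(0, NOISE) for _ in range(15)]
    single_item_errors.append(abs(scores[0] - TRUE_SCORE))   # one overall question
    index_errors.append(abs(sum(scores) / 15 - TRUE_SCORE))  # composite index

avg = lambda xs: sum(xs) / len(xs)
print(f"average error, single item:   {avg(single_item_errors):.3f}")
print(f"average error, 15-item index: {avg(index_errors):.3f}")
```

The index's average error comes out roughly 1/√15 of the single item's, which is the statistical reason a composite index is the more stable trackable measure.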
As Oppenheim points out3, it has been demonstrated many times that attitude questions are more prone to random error than factual ones. This is because attitude measurement is a combination of a person’s stable underlying attitude (e.g. they think the organisation is pretty good or quite poor) plus a collection of momentary, unstable determinants such as questionnaire wording, context, recent experiences, mood of the moment etc. As shown in Figure 11.1, Oppenheim points out that these distorting effects will apply more to some questions than others and will distort the answers in different ways, but the underlying attitude will be common across the questions and will be what remains in an index once the random distortions have largely cancelled each other out.

11.2.2 Moving the needle
As well as the fact that it is measuring attitudes, the random error problem is accentuated in CSM because customer satisfaction changes only slowly, especially in an upwards direction. Since any survey based on sampling will have a confidence interval (margin of error) based primarily on the size of the sample, the random measurement error of the single overall satisfaction question will compound the confidence interval to produce a headline measure that is far too volatile to be useful for monitoring
customer satisfaction. The more frequently companies track customer satisfaction, the more serious this business problem becomes, since staff will eventually decide that there is no relationship between movements in the measure and the real performance delivered by the organisation. They consequently draw the conclusion that trying to measure and improve customer satisfaction is pointless, whereas, in reality, the problem is the instability of the measure they are tracking. Myers4 calls this problem “moving the needle”, and agrees with us that it has posed a significant problem for many American corporations. He advocates maximising the sensitivity of the scale by increasing the number of points and by strengthening the high and low end-point anchor statements, as well as minimising confidence intervals by using the largest affordable sample sizes. The unsuitability of a single overall satisfaction question as the trackable measure has also been widely supported elsewhere in the CSM literature5,6,7.

KEY POINT
The headline measure of customer satisfaction must be a composite index rather than a single-item overall satisfaction question, since the latter is much more prone to random measurement error.

11.2.3 The customer’s judgement
An important characteristic of a headline measure is that it should reflect as closely as possible the way customers make their satisfaction judgements. The use of a composite index conforms with current understanding about how customers make satisfaction judgements – based on multiple aspects of the customer experience rather than one overall impression8,9,10,11. As we have said earlier in this book, this phenomenon has been labelled ‘the lens of the customer’, and exploratory research is used to capture these key aspects of the customer experience, which then form the basis of the CSM survey. This leads us to another fundamental question of methodology.
Should all the customer requirements be treated equally by the index or should they be weighted so that some contribute more to the index than others?
11.3 Weighting the index

11.3.1 Weighted or non-weighted
The most straightforward customer satisfaction index would be a non-weighted average of all the satisfaction scores. The appeal of this approach is its simplicity, making it easy to calculate and easy for staff to understand. The former benefit is minimal since calculating a weighted index presents no problem. Even the most computing-intensive method of a customer-by-customer index is well within the capabilities of a standard spreadsheet. When it comes to staff, however, transparency,
simplicity and ease of communication are very helpful. Apart from the organisational resources consumed in explaining a complicated calculation, employees tend to be suspicious of ‘black box’ figures invented by management. American bank card issuer, MBNA, has a very effective customer satisfaction-based bonus scheme based on a non-weighted index across a number of customer requirements12. A sample of customers is interviewed every day and the previous day’s score is posted for all employees to see as they arrive at work. Every day that the index is above target the company contributes money to a bonus fund, which is paid out quarterly. The index is clearly understood and universally accepted by staff, who stand a renewed chance of earning bonus every day when they arrive at work. Weighting the index, however, is widely advocated in the CSM literature on the grounds that the relative importance of customers’ requirements will differ across sectors and from one individual to another and that people place more emphasis on the things that are most important to them when making satisfaction judgements9,13,14. The original SERVQUAL questionnaire was revised in 1991 to incorporate the scoring of the five dimensions for importance so that they could be weighted during the analysis15. If the ‘lens of the customer’ principle is accepted, it is impossible to argue against weighting the index, since the most important methodological requirement of a headline measure is that it should reflect the customers’ judgements as accurately as possible. Capturing the customers’ true underlying attitudes is particularly important if the index is to be used for modelling the extent to which customer satisfaction affects various desirable outcomes such as customer loyalty or the company’s financial performance16. Moreover, an unweighted index has, in reality, assigned relative weightings to its components – equal ones. 
By implication, an unweighted index is saying that customers place equal emphasis on all of its component parts when judging the organisation. All things considered, the arguments point strongly towards a weighted index. If so, the remaining question is how it should be weighted.

11.3.2 Weighting options
There are three methods of weighting a customer satisfaction index. The weighting factors can be based on management judgements, statistically derived measures of impact or relative importance.

(a) Judgemental weighting factors
There are several reasons why companies may choose to adopt judgemental weighting factors. First, management may believe that they know what’s important to customers and can therefore base the weighting factors on their own judgements. This view clearly contradicts the fundamental premise of this book, that if satisfaction is based on meeting or exceeding the customer’s requirements, any measure of satisfaction can be
accurately produced only from the customer’s perspective. A second and more valid reason for using judgemental weighting factors would be to align with an organisational strategy that emphasises certain values, such as friendliness, integrity, environmental concern etc. Using this method, the most important organisational values would be weighted more heavily in the index. This type of approach would be justifiably adopted for many aspects of management, such as incorporating a ‘living the values’ component into employees’ appraisals, but is moving into a different type of customer research. It is rarely possible in research to ‘kill two birds with one stone’. An image or perception survey to understand the extent to which the organisation is associated with its values in the outside world may be a very useful exercise, but it is not the same as measuring customer satisfaction. Myers4 points out that organisations adopting judgemental weighting factors often regret their decision at a later date as unproductive debates about the weighting factors consume management time and thoughts. This is avoided by adopting empirically justifiable weighting factors such as those explained in (b) and (c).

(b) Statistically derived weighting factors
The CSM literature is divided between the use of weighting factors based on statistically derived measures of impact and the relative importance of the requirements to the customers. Some argue that statistically derived importance rather than stated importance measures should be used16. However, as we saw in Chapter 4, so-called derived importance measures are not really measures of how important requirements are to the customer but rather indicators of the amount of impact made by each requirement on an outcome variable such as overall satisfaction or loyalty16,17. Sometimes, statistically derived measures, produced by a variety of statistical techniques, are called ‘relative importance’ but are actually measures of relative impact.
The fact that different statistical techniques are advocated for producing impact measures argues against their use in a headline index of customer satisfaction since it would lead to a further debate about which particular statistical technique should be used. Moreover, practitioner experience shows that statistically derived impact coefficients are much less stable than stated importance scores, a big disadvantage for an index that must be trackable over time. In reality, any mathematical derivation of ‘relative importance’ is something quite different from asking the customers to score factors for importance18. It is therefore better to use both stated and derived importance measures for a fully rounded analysis of customer satisfaction data and for developing action plans, but to use stated importance measures to produce the weighting factors for a trackable customer satisfaction index.

(c) Relative importance weighting factors
As we have seen earlier in this book, importance scores are generated by asking
customers. In addition to scoring importance on a 10-point scale as explained earlier, there are more complex methods of generating stated importance scores such as paired comparisons and points share. Whilst these methods have some appeal, based mainly on the fact that their forced trade-off approach tends to generate a wider range of importance scores, they are considered less appropriate to CSM than to other forms of market research due to the large number of variables typically involved in customer satisfaction research4. This means that the stated importance scores generated by the main survey should be used for the weighting factors6,19, since these most accurately reflect the actual importance of the requirements to the customer20 and will aid tracking by maximising stability and comparability. Note that the importance scores from the main survey rather than the exploratory research should be used since the larger sample size gives them greater reliability. The only exception would be if a quantitative exploratory survey (as explained in Chapter 5) has been conducted, in which case the statistical reliability would be perfectly adequate.

KEY POINT
The customer satisfaction index should be weighted according to the relative importance of customers’ requirements.
11.4 Calculating a customer satisfaction index
The most accurate customer satisfaction index will be produced by calculating an individual index for each respondent prior to averaging all the individual indices. Whilst average importance scores from across the whole sample and average satisfaction scores can be used, the resultant index will be less accurate for two reasons. First, the relative importance of the requirements will vary between individual customers, so using respondents’ own importance scores to calculate their weighting factors will be more accurate. The second reason concerns ‘not-applicables’, which will have an increasingly distorting effect on the index as their volume grows. We will use a question on complaint handling as an illustration. A company with high levels of customer satisfaction will typically have to handle complaints from only a small percentage of its customers. Consequently, if the question appears in a CSM survey, most respondents will score it ‘not applicable’ for satisfaction. Since complaint handling is an area where organisations are notoriously poor at meeting customers’ requirements, the minority of respondents that do score it will probably generate quite a poor average satisfaction score. If average scores are used to calculate the index this low complaint handling score will be unfairly applied to all the respondents. The individual satisfaction indices for the majority of respondents, who had not scored complaint handling, would be higher since their indices would contain no data for complaint handling. This is also intuitively sound since we are measuring customers’ feelings of satisfaction or dissatisfaction with the
customer experience. Clearly, it would be wrong to include in the measure parts of the customer journey that they have not experienced.

11.4.1 Calculating the weighting factors
To demonstrate the calculation of a customer satisfaction index, we will use the hypothetical supermarket example with only eight requirements. The first column in Figure 11.2 shows the importance scores given by one respondent. To calculate the weighting factors, simply total all the importance scores. In this example they add up to 60. Then express each one as a percentage of the total. Using ‘staff appearance’ as an example, 3 divided by 60, multiplied by 100 produces a weighting factor of 5%. Taking ‘speed of service’, 10 divided by 60, multiplied by 100 equals 16.66%, so due to its much greater relative importance for this customer, ‘speed of service’ will affect her index more than three times as heavily as ‘staff appearance’.

FIGURE 11.2 Calculating the weighting factors (for one respondent)

Customer requirement    Importance score    Weighting factor
Choice of products      7                   11.66%
Expertise of staff      9                   15.00%
Price                   8                   13.33%
Speed of service        10                  16.66%
Quality of products     8                   13.33%
Layout of store         6                   10.00%
Staff helpfulness       9                   15.00%
Staff appearance        3                   5.00%
TOTAL                   60
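The weighting-factor arithmetic in Figure 11.2 can be expressed as a short sketch (importance scores taken from the figure; variable names are our own):

```python
# Weighting factors for one respondent: each importance score expressed as a
# proportion of the respondent's total importance score (Figure 11.2).
importance = {
    "Choice of products": 7, "Expertise of staff": 9, "Price": 8,
    "Speed of service": 10, "Quality of products": 8, "Layout of store": 6,
    "Staff helpfulness": 9, "Staff appearance": 3,
}

total = sum(importance.values())                      # 60
weights = {req: score / total for req, score in importance.items()}

print(round(weights["Staff appearance"] * 100, 2))    # 5.0
print(round(weights["Speed of service"] * 100, 2))    # 16.67
```

Note that 10/60 is really 16.67% rather than the truncated 16.66% shown in the figure; the weights always sum to exactly 100% when left unrounded.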
11.4.2 Calculating the Satisfaction Index
The second step is to multiply each satisfaction score by its corresponding weighting factor. The first column of data in Figure 11.3 shows the satisfaction scores for our one respondent and the second column of data shows her weighting factors that were calculated in Figure 11.2. Taking ‘staff appearance’ as the example, the satisfaction score of 9 multiplied by the weighting factor of 5% produces a weighted score of 0.45. The overall weighted average is determined by adding up all the weighted scores. In this example they add up to 7.41, so the weighted average satisfaction score for our one respondent is 7.41 out of 10. It is normal to express the index as a score out of 100, so in this example the respondent’s Satisfaction Index is 74.1%. Note that this second step is based solely on the satisfaction scores for the list of customer requirements generated by the exploratory research. Scores for overall satisfaction,
loyalty questions or any other additional questions should not be included.

FIGURE 11.3 Calculating the Satisfaction Index

Customer requirement    Satisfaction score    Weighting factor    Weighted score
Choice of products      8                     11.66%              0.93
Expertise of staff      10                    15.00%              1.50
Price                   7                     13.33%              0.93
Speed of service        9                     16.66%              1.50
Quality of products     6                     13.33%              0.80
Layout of store         7                     10.00%              0.70
Staff helpfulness       4                     15.00%              0.60
Staff appearance        9                     5.00%               0.45
Weighted average                                                  7.41
Satisfaction Index for one respondent                             74.1%
That procedure would now be repeated for all the other respondents and all the individual indices averaged to produce the overall customer satisfaction index for the organisation. On first reading this may seem to be a daunting task for a large sample. However, even basic computing skills would enable the formulae generated for the first respondent to be quickly transferred across the rest.
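The whole per-respondent procedure, including the ‘not applicable’ handling discussed in section 11.4, can be sketched as follows (the function and variable names are our own; the scores are the respondent from Figures 11.2 and 11.3):

```python
# Sketch: a weighted satisfaction index for one respondent. Requirements the
# respondent did not score (None = 'not applicable') are dropped from both
# their weighting factors and their index, as described in section 11.4.
def respondent_index(importance, satisfaction):
    """Weighted satisfaction index (0-100) for one respondent.

    importance, satisfaction: dicts keyed by customer requirement;
    a satisfaction value of None means 'not applicable' for this respondent.
    """
    answered = [req for req, sat in satisfaction.items() if sat is not None]
    total_importance = sum(importance[req] for req in answered)
    weighted_avg = sum(satisfaction[req] * importance[req] / total_importance
                       for req in answered)
    return weighted_avg * 10   # 0-10 weighted average expressed out of 100

# The respondent from Figures 11.2 and 11.3:
imp = {"Choice": 7, "Expertise": 9, "Price": 8, "Speed": 10,
       "Quality": 8, "Layout": 6, "Helpfulness": 9, "Appearance": 3}
sat = {"Choice": 8, "Expertise": 10, "Price": 7, "Speed": 9,
       "Quality": 6, "Layout": 7, "Helpfulness": 4, "Appearance": 9}

print(round(respondent_index(imp, sat), 1))
# 74.2 (Figure 11.3 shows 74.1 because its weighting factors were rounded)
```

The overall customer satisfaction index is then simply the mean of every respondent's individual index.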
11.5 Updating the Satisfaction Index
It is important that the Satisfaction Index is updateable. It has to provide a comparable measure of satisfaction that is trackable in the years ahead even if the questions on the questionnaire have to change as customers’ requirements change. Basically, the Satisfaction Index answers this question: “How successful is the organisation at satisfying its customers according to the 20 things that are most important to them?” (Assuming 20 customer requirements on the questionnaire.) If the questionnaire has to change in the future because customers’ priorities have changed, the Satisfaction Index remains a measure of exactly the same thing: “How successful is the organisation at satisfying its customers according to the 20 things that are most important to them?” That comparability also applies to organisations with different customer groups who
need to be asked different questions in the same year. Provided the exploratory research has been correctly undertaken, the Satisfaction Indices from two or more surveys asking different questions are directly comparable. They’re both a measure of the extent to which each organisation met its customers’ requirements.
11.6 The reliability of an index
Survey results are often accompanied by a measure of reliability. An opinion poll, a CSM survey or an estimate of male height could be reliable to +/- 1%. This is its margin of error. If you measured a random and representative sample of adult males in the UK and recorded an average height of 5 feet 10 inches, with a margin of error of +/- 1%, the true mean height of UK adult males could be anywhere between 5 feet 9.3 inches and 5 feet 10.7 inches. Provided the sample is random and representative, the margin of error in its result will be caused by random error. Even if the sample was completely representative of all the demographic groups, it may by chance have included young males who were unusually short, older men who were smaller than average, Scottish men who were less tall than the average Scot and so on. It would have been unlucky, but at random, with no explanation, it could happen. To understand the reliability of a sample for a CSM survey, the following factors must be considered:
a) Sample size
b) Confidence interval
c) Confidence level
d) Sub-groups
e) Standard deviation

11.6.1 Sample size
The reliability of a sample is based on its absolute size and not its proportion to the total population, for two reasons. First, the bigger the sample, the less impact extreme data will make on the overall result. A 7 foot 6 inches tall man could skew the average height of a sample of 10 by fully 2 inches, but only by 0.2 inches on a sample of 100 (well within our +/- 1% margin of error) and only by 0.02 inches on a sample of 1,000. Secondly, the mean is what it is because most people are average or close to it, so the larger the random sample the greater the likelihood that most will be close to the average and few will be in the extremes. For these reasons we said that a sample of 200 should be seen as the minimum for a reliable result at the overall level.
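The outlier arithmetic above can be checked in a few lines (heights in inches, using the hypothetical figures from the text):

```python
# How much one extreme value shifts a sample mean: replacing one typical
# observation (70" = 5'10") with an outlier (90" = 7'6") moves the mean by
# (outlier - typical) / n, so the distortion shrinks as the sample grows.
def skew_from_outlier(n, outlier=90.0, typical=70.0):
    """Shift in the sample mean caused by one extreme value among n observations."""
    return (outlier - typical) / n

print(skew_from_outlier(10))     # 2.0 inches
print(skew_from_outlier(100))    # 0.2 inches
print(skew_from_outlier(1000))   # 0.02 inches
```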
Bigger samples will be more reliable, but there will be increasingly diminishing returns in reliability as the sample size grows beyond 200.

11.6.2 Confidence interval
The confidence interval is the margin of error. If a Satisfaction Index is 78.6% with a confidence interval of +/- 1%, the lowest it could be if every customer were surveyed is 77.6% and the highest is 79.6%. The confidence interval is basically the precision
of the result. The only controllable method for improving the precision of a Satisfaction Index, lowering its confidence interval, is to increase the sample size. The term ‘confidence interval’ is somewhat unfortunate since it is often confused with a different concept known as the ‘confidence level’.

11.6.3 Confidence level
The confidence level sets a reliability level for the confidence interval or margin of error. The normal confidence level used in business research is 95%, but this is discretionary. It is possible to calculate a margin of error at any confidence level. If an index is 78.6% with a confidence interval of +/- 1%, at the 95% confidence level, it means that if the survey were repeated many times the index would be between 77.6% and 79.6% at least 95% of the time, or 19 out of every 20 surveys. It is possible, like medical researchers, to set a more demanding level of reliability by using a 99% confidence level, which would increase the margin of error for any given sample size. Alternatively, by choosing to operate at the 90% confidence level it is possible to make an index look more accurate, at least to the uninitiated. However, it is strongly recommended that for CSM, organisations should work with the normal, 95% confidence level.

11.6.4 Sub-groups
A sample of 200 for a typical CSM survey will, on average, produce a confidence interval of +/- 1.5% at the 95% confidence level, but by the time it is divided into sub-groups, the precision at segment level will be much lower. Most organisations would accept a less accurate result at segment level, the general consensus being that sub-groups should contain no fewer than 50 respondents, giving a confidence interval of approximately +/- 5%. As the sample gets smaller, confidence intervals will become wider. Figures 11.4 and 11.5 show real data for a chain of ten restaurants with an overall sample of 500 and 50 customers per restaurant. The confidence interval at the overall level is +/- 0.8%.
At the sub-group level it varies from +/- 2.0% in Glasgow to +/- 2.8% in Hampstead. The level of precision required is a policy decision for the company, but often the size of the overall sample will be determined by the number of sub-groups and the margin of error that is acceptable at sub-group level. If the restaurant chain decided that the +/- 2.8% achieved in Hampstead was not sufficiently accurate it could opt for samples of 100 for greater reliability at restaurant level, which, on this survey, with lower than normal standard deviations, would have achieved confidence intervals below +/- 2%. At the overall level, however, the bigger sample of 1,000 would show relatively little gain in accuracy over the original sample of 500.
FIGURE 11.4 Confidence intervals at sub-group level

                      Index    Confidence interval    Sample size
Overall               82.5%    +/-0.8%                500
Individual restaurants
Glasgow               86.1%    +/-2.0%                50
York                  84.4%    +/-2.7%                50
Hampstead             83.8%    +/-2.8%                50
Wimbledon             83.5%    +/-2.2%                50
Manchester            83.2%    +/-2.6%                50
Leeds                 81.7%    +/-2.6%                50
Oxford                81.4%    +/-2.2%                50
Cheltenham            80.9%    +/-2.3%                50
Cambridge             80.8%    +/-2.4%                50
Lincoln               78.8%    +/-2.7%                50
FIGURE 11.5 Sub-group confidence intervals illustrated
[Chart: the overall index and each restaurant’s index plotted with its confidence interval on a 65%–90% scale]
11.6.5 Standard deviation
As we said in Chapter 6, if all British adult males were identical in height, you would have to measure only one of them to know, with absolute certainty, the average height of men in the UK, even if there were 20 million of them. The bigger the variation in men’s height, the more you would have to measure to be reasonably certain of your answer. The chief measure of variance for numerical data is the standard deviation, whose formula we explained in Chapter 10. As we said in Chapter 6, a higher standard deviation needs a bigger sample to achieve a given margin of error, other things being equal. Whilst typical CSM standard deviations are lower than in most other forms of market research, they can vary considerably from one survey to
another, simply reflecting the fact that people agree more about some things than others. The survey shown in Figure 11.4 had particularly low standard deviations, resulting in a high level of accuracy (+/- 0.8%) for the sample of 500. By contrast, the data shown in Figure 11.6 has a wider margin of error, with a confidence interval of +/- 1.3% at the overall level even though the sample contained 100 more customers. It can also be seen that the data for the service teams is less reliable than the indices for the individual restaurants shown in Figure 11.4 even though most of the teams have larger sample sizes than the restaurants.

FIGURE 11.6 Sub-group confidence intervals: example 2

                 Index    Confidence interval    Sample size
Overall          73.6%    ±1.3%                  600
Areas
Area 1           74.0%    ±1.9%                  300
Area 2           73.3%    ±1.8%                  300
Service teams
Team 9           76.8%    ±4.2%                  60
Team 3           76.8%    ±4.4%                  57
Team 1           75.9%    ±3.7%                  66
Team 7           75.7%    ±5.0%                  57
Team 6           74.1%    ±4.6%                  47
Team 2           73.8%    ±3.3%                  89
Team 10          73.1%    ±4.3%                  50
Team 5           72.1%    ±4.5%                  45
Team 8           69.8%    ±4.7%                  52
Team 4           68.7%    ±4.2%                  77
11.6.6 Calculating the margin of error
Calculating the margin of error, or confidence interval, is based on the variables explained in this chapter (sample size, confidence level and standard deviation), and its Excel formula is CONFIDENCE. This is simply illustrated with the following example. Imagine a sample of 200 customers produced a mean score of 8.26 for ‘expertise of the financial advisor’ with a standard deviation of 1.2. The margin of error is 8.26 +/- CONFIDENCE. To perform the calculation Excel asks for the sample size (200), the standard deviation (1.2) and the ‘alpha’. Alpha is the significance level used to compute the confidence level, which equals 100 x (1 – alpha)%, so a confidence level of 95% gives an alpha of 0.05. The complete formula is: CONFIDENCE(alpha, standard_deviation, sample_size), or CONFIDENCE(0.05, 1.2, 200).
In our ‘expertise of financial advisor’ example, the formula produces an answer of +/- 0.166, which we can round to 0.17. The confidence interval is therefore: 8.26 +/- 0.17 = 8.09 to 8.43. This means that 95% of the time, customer satisfaction with expertise of staff across the entire customer base would not be lower than 8.09 or higher than 8.43. However, there is a 5% (1 in 20) risk that the score for a census of customers could be outside of that range. At the 95% confidence level, the same calculation could be made for a customer satisfaction index of 77.43% with a standard deviation of 12, giving a confidence interval of +/- 1.66. The range for the index would therefore be 75.77% to 79.09%.

11.6.7 Typical confidence intervals for CSM
Based on a vast amount of CSM data, The Leadership Factor has calculated that the average standard deviation for a customer satisfaction index is 11. Using the formula explained above produces the range of confidence intervals shown in Figure 11.7 for various sample sizes.

FIGURE 11.7 Confidence intervals for CSM data

Sample size    Precision guide
100            +/- 2.16%
200            +/- 1.52%
500            +/- 0.92%
1000           +/- 0.68%
5000           +/- 0.30%
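Excel’s CONFIDENCE function is simply the normal-distribution margin of error z × s/√n. The sketch below (function name our own) recalculates the ‘expertise of the financial advisor’ example and approximately reproduces the precision guide in Figure 11.7:

```python
# A spreadsheet-free equivalent of Excel's CONFIDENCE(alpha, std_dev, size):
# margin of error = z * s / sqrt(n), where z is the two-tailed critical value.
from math import sqrt
from statistics import NormalDist

def confidence(alpha, std_dev, size):
    """Margin of error at the (1 - alpha) confidence level."""
    z = NormalDist().inv_cdf(1 - alpha / 2)   # 1.96 for alpha = 0.05
    return z * std_dev / sqrt(size)

# The 'expertise of the financial advisor' example: n = 200, mean 8.26, sd 1.2.
moe = confidence(0.05, 1.2, 200)
print(round(moe, 2))                                 # 0.17
print(round(8.26 - moe, 2), round(8.26 + moe, 2))    # 8.09 8.43

# Figure 11.7 style: a typical CSM standard deviation of 11 at various sizes.
for n in (100, 200, 500, 1000, 5000):
    print(n, round(confidence(0.05, 11, n), 2))
```

With sd = 11 the loop reproduces Figure 11.7 to within a few hundredths of a point, showing the diminishing returns in precision as the sample grows.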
11.7 Constructing a loyalty index
In Chapter 9 we examined the kind of loyalty questions that can be asked in a customer survey and we said that companies typically use the three or four that are most relevant to them and combine the data into a loyalty index. As we explained in Chapter 1, some organisations have recommendation as the only loyalty question and use the information to produce a ‘net promoter score’21. Of course, the reasons for monitoring a loyalty index rather than a single loyalty question are the same as those explained for having a satisfaction index. The main difference between the satisfaction index and the loyalty index is that the latter will usually not be weighted. Since customer satisfaction is about the extent to which the organisation met its customers’ requirements, the first part of the equation (customers’ requirements and their relative importance) has to be included in the measure as well as the second part of the equation (customers’ satisfaction with the
organisation’s performance). Loyalty is different. First of all, it is a behaviour rather than an attitude. The loyalty questions are ‘lens of the organisation’ questions designed to reflect as closely as possible the kind of loyalty behaviours that the organisation would like to see. A loyalty index would therefore normally be calculated as the simple mean score of the loyalty questions. If a loyalty index is weighted, judgemental weighting factors would typically be used. Customer-generated weighting factors are not relevant since one cannot say that recommendation is more important to customers than related sales or commitment. Some aspects of loyalty may be more important than others to the organisation, in which case judgemental weighting factors could be considered. For example, an insurance company may take the view that retention (as measured by intention to renew) is the most important aspect of loyalty, followed by value for money, related sales and recommendation. If so, it might weight the four aspects of loyalty 40%, 30%, 20% and 10% respectively. Or perhaps it might decide on 50%, 25%, 15% and 10%. Ideally there would be some facts, such as a detailed customer lifetime value calculation, to provide an empirically justifiable basis for the weighting factors; otherwise, management consensus will have to be used. Moving on a further step in terms of technical difficulty, statistically derived weighting factors could be used if appropriate data exists. As well as truly accurate customer lifetime value data, the company would have to be able to relate the loyalty behaviours to a specific business outcome such as sales or profit. With adequate data (which rarely exists), advanced statistical techniques such as partial least squares or structural equation modelling could be used to calculate the relationship between the financial outcome and the various components of loyalty. If available, they would provide the best basis for weighting a loyalty index.
Once the weighting factors are adopted, the calculation of the weighted loyalty index would proceed according to the method outlined in Figure 11.3.
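As a hedged illustration only (Figure 11.3 is not reproduced here, so this assumes the index is a weighted mean of the loyalty question scores, expressed as a percentage of the scale maximum; the weights are the insurance example above and the scores are invented), the calculation might be sketched as:

```python
# Hedged sketch: assumes the weighted loyalty index is a weighted mean of
# loyalty question scores on a 10-point scale, expressed as a percentage of
# the maximum. Weights follow the insurance example in the text; the scores
# are invented for illustration.

def weighted_loyalty_index(scores, weights, scale_max=10):
    """Weighted mean of loyalty scores, as a percentage of the scale maximum."""
    assert abs(sum(weights.values()) - 1.0) < 1e-9, "weights must sum to 1"
    weighted_mean = sum(scores[q] * w for q, w in weights.items())
    return 100 * weighted_mean / scale_max

weights = {"retention": 0.40, "value_for_money": 0.30,
           "related_sales": 0.20, "recommendation": 0.10}
scores = {"retention": 8.6, "value_for_money": 7.9,
          "related_sales": 7.1, "recommendation": 8.2}

print(round(weighted_loyalty_index(scores, weights), 1))  # 80.5
```

The same function serves for an unweighted index by passing equal weights for each question.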
11.8 Monitoring loyalty behaviour
There is a very strong case for using the customer satisfaction index as the main attitudinal measure and lead indicator of the organisation’s performance in delivering results to customers, with real customer behaviours used to provide the loyalty measures – albeit lagging ones. Of the loyalty questions detailed in Chapter 9, the most useful areas to monitor are Harvard’s 3Rs of loyalty – retention, related sales and referrals.

11.8.1 Retention
The best measure of retention for most organisations is the percentage of customers that the company had one year ago who remain live customers today. The converse of that percentage, the defection or decay rate, could equally be used. In markets where multiple sourcing is common, such as food retailing or many B2B markets for raw materials, this is a weak measure of loyalty. Customers could have used the
company within the last year whilst buying far more from competitors. The measure is also unsuited to markets where the product or service is typically bought less than once a year. These problems can be alleviated by reducing the time period in promiscuous markets and extending it in those with a long purchasing cycle. Rather like satisfaction, retention is an essential pre-requisite of loyalty rather than an end in itself, but it is a crucial measure because it is the first step in the process. If retention rates are too low, companies never achieve the real financial benefits of customer loyalty.

11.8.2 Related sales
One of loyalty’s main financial benefits is that loyal customers generate more revenue due to their greater usage of the company’s products and/or services. A very simple measure of this aspect of loyalty is the number of a company’s products or services bought by the customer. A bank, for example, may offer its customers a range of insurance products, loans, mortgages, life assurance and credit cards as well as the core banking product. A car dealership could monitor its customers’ purchase of additional vehicles, their use of servicing, their purchase of accessories and their adoption of other services such as insurance or ‘experiences’ (e.g. track driving or off-road driving days). In both cases, monitoring the behaviour of the family unit will be a better indicator of loyalty than that of the individual customer – an obvious measure that is incredibly under-utilised by many organisations. For companies with a huge range of products, like supermarkets, monitoring category usage would be more appropriate, whilst for single-product organisations the amount of usage is the only feasible measure, like the per-customer spend figure reported for Orange in Chapter 1.
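The two behavioural measures above – year-on-year retention and a simple products-held count – can be sketched as follows. This is an illustration only: the customer identifiers and product holdings are invented.

```python
# Hedged sketch of the behavioural measures above: year-on-year retention and
# a simple related-sales count (products held per customer). Customer IDs and
# product holdings are invented for illustration.

live_last_year = {"C001", "C002", "C003", "C004", "C005"}
live_today = {"C001", "C003", "C004", "C006"}  # C006 is a new customer

# Retention: percentage of customers live a year ago who remain live today
retention_rate = 100 * len(live_last_year & live_today) / len(live_last_year)
defection_rate = 100 - retention_rate
print(retention_rate, defection_rate)  # 60.0 40.0

# Related sales: average number of products held per customer
products_held = {
    "C001": {"current account", "credit card", "mortgage"},
    "C003": {"current account"},
    "C004": {"current account", "loan"},
}
avg_products = sum(len(p) for p in products_held.values()) / len(products_held)
print(avg_products)  # 2.0
```

For a household-level measure, the same calculation would simply be run over family units rather than individual customer IDs.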
11.8.3 Referrals
Referrals are extremely valuable because, as well as reducing customer acquisition costs, new customers acquired through recommendation are more profitable than those that come through advertising or other marketing programmes22. Accurate monitoring of recommendation is rarely achieved by organisations because it takes real effort. Every new customer must be thoroughly interrogated about how and why they became a customer and, if through referral, which current customer had recommended them. Both the recommender and the referred customer must be flagged on the database. Suitable measures of recommendation are the percentage of new customers acquired through referrals each year and the percentage of existing customers that recommended a new customer. It is even better if the number of times a customer has recommended is also recorded. Clearly, this information is a much more accurate measure of customer loyalty than a ‘net promoter’ score21 generated by a ‘likelihood to recommend’ question.

11.8.4 Customer lifetime value
By far the best behavioural measure of loyalty is customer lifetime value, particularly in view of Harvard’s assertion that the most loyal customers are up to 138 times more profitable than the least loyal22. Whilst this is a complex subject, there are some
fundamental principles that underpin an accurate calculation of customer lifetime value. First, customers must be divided into cohorts, usually based on the year that they first became a customer. Behavioural data is then monitored and compared across customer cohorts, typically demonstrating that a Year 5 or 6 customer, for example, is far more valuable than a first or second year customer. A crude, but nevertheless useful, measure of customer lifetime value would simply be average per-customer spend in each cohort, although this would seriously under-estimate the true value of customer loyalty. Adding a recommendation value would be a significant step in addressing this deficiency. A simple way of valuing referrals is to base it on the average cost of acquiring a new customer through sales and marketing activities. This does need to be the full cost, including the salaries and overheads of all sales and marketing departments as well as spend on sales and marketing activities. This total cost is then simply divided by the number of new customers acquired in the financial year, excluding referrals. Although accurate monitoring of referrals is highly desirable, where data is incomplete it is still worth including a recommendation value by allocating customers of unknown origin in the correct proportions to referral and marketing channels. There are many ways of improving the sophistication of a customer lifetime value measure, such as basing the figures on profit rather than sales, including a ‘cost of servicing’ figure (typically higher for new customers), or adding to the referral value an amount that reflects the known future premium of referred customers compared to customers won through sales and marketing. An accurate measure of customer lifetime value will correlate far more strongly with the company’s financial performance than any other measure of loyalty.
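A minimal sketch of the crude cohort measure and the referral value described above follows. All customers, spends and costs are invented for illustration.

```python
# Hedged sketch of the crude cohort measure and referral value described
# above. All customers, spends and costs are invented.

from collections import defaultdict

customers = [  # (id, acquisition_year, annual_spend, referrals_made)
    ("A", 2002, 4200.0, 2), ("B", 2002, 3900.0, 0),
    ("C", 2005, 1500.0, 1), ("D", 2005, 1700.0, 0),
]

# Crude measure: average per-customer spend in each acquisition-year cohort
spend, counts = defaultdict(float), defaultdict(int)
for _, year, annual_spend, _ in customers:
    spend[year] += annual_spend
    counts[year] += 1
avg_spend = {year: spend[year] / counts[year] for year in spend}
print(avg_spend)  # {2002: 4050.0, 2005: 1600.0}

# Referral value: full sales and marketing cost (salaries and overheads
# included) divided by new customers acquired that year, excluding referrals
total_sm_cost = 500_000.0
new_customers_ex_referrals = 250
acquisition_cost = total_sm_cost / new_customers_ex_referrals  # 2000.0

# Crude per-customer value = annual spend + value of referrals generated
value = {cid: s + refs * acquisition_cost for cid, _, s, refs in customers}
print(value["A"])  # 8200.0
```

A more sophisticated version would substitute profit for spend and deduct a cost-of-servicing figure, as the text suggests.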
Where good customer lifetime value measures exist, companies can derive added benefit from their CSM processes by linking satisfaction with customer lifetime value. They can understand, for example, what’s most important to the most valuable customers, or what causes customers to defect in the cohort with the lowest retention rate.
Conclusions
1. The least useful headline measure of customer satisfaction is provided by a single overall satisfaction question, especially if it uses a 5-point verbal scale, since it will be far too insensitive to detect the relatively small movements in customer satisfaction that typically occur.
2. The major benefit of a composite index over a single question is its much greater reliability and stability23, since the random measurement error is largely cancelled out across its component questions, resulting in a much more accurate measure.
3. If the index is to reflect as closely as possible the customer’s satisfaction judgement, it should be weighted according to the importance of its component requirements.
4. Weighting factors should be empirically justifiable rather than judgemental. Of the empirical options, stated importance is better than statistically derived impact measures because it more closely reflects the relative importance of the requirements to customers.
5. For the greatest accuracy a weighted index should be calculated for each individual respondent. All the individual indices are then averaged to produce a customer satisfaction index for the organisation.
6. Provided the survey is based on the ‘lens of the customer’, the customer satisfaction index is comparable over time even if the questions change (as customers’ priorities evolve) and is comparable across organisations, since it is a measure of the extent to which an organisation is meeting the requirements of its customers.
7. The reliability of an index is a combination of its precision or accuracy (the confidence interval) and its repeatability (the confidence level). 95% is the normal confidence level.
8. The confidence interval, or margin of error, will be affected by the standard deviation but will be determined mainly by the sample size.
9. For customer satisfaction surveys a sample of 200 will typically have a confidence interval of around +/- 1.5%. A sample of 500 is necessary to be reasonably certain of a confidence interval below +/- 1%.
10. A lower level of precision is usually acceptable for segment results. A sub-group sample of 50 will typically have a confidence interval for CSM of +/- 4% to 5%, with samples of 100 achieving confidence intervals of around 2 to 2.5%.
11. A headline measure of loyalty can be produced from survey questions and should also be an index rather than a single question, but is not usually weighted.
12. The best measures of loyalty are based on real customer behaviour, with customer lifetime value being the most useful, but few organisations have the data capability to produce a worthwhile measure of customer lifetime value.
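The sample-size arithmetic behind conclusions 7 to 10 can be sketched as below. This is a hedged illustration: the standard deviation of 1.5 scale points is an assumption, and a composite index usually has a smaller standard deviation than a single question, so real margins of error can be tighter than these figures.

```python
# Hedged sketch of the margin-of-error arithmetic. The standard deviation of
# 1.5 scale points is an assumption for illustration; a real survey should
# use the observed standard deviation of its own index.

import math

def margin_of_error(std_dev, n, scale_max=10, z=1.96):
    """95% confidence interval half-width, as a percentage of scale maximum."""
    return 100 * z * std_dev / (math.sqrt(n) * scale_max)

for n in (50, 100, 200, 500):
    print(n, round(margin_of_error(std_dev=1.5, n=n), 2))
```

The key point the code illustrates is conclusion 8: halving the margin of error requires roughly quadrupling the sample size, whereas the standard deviation only scales the result linearly.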
References
1. Galilei, Galileo (1633) "Dialogue concerning the two chief world systems – Ptolemaic and Copernican", trans Drake, Stillman (1953), Third day discussion, University of California Press, Berkeley
2. Pearson and Kendall (1970) "Studies in the history of statistics and probability", Charles Griffin and Co, London
3. Oppenheim, A N (1992) "Questionnaire Design, Interviewing and Attitude Measurement", Pinter Publishers, London
4. Myers, James H (1999) "Measuring Customer Satisfaction: Hot buttons and other measurement issues", American Marketing Association, Chicago, Illinois
5. Helsdingen and de Vries (1999) "Services marketing and management: An international perspective", John Wiley and Sons, Chichester
6. Oliver, Richard L (1997) "Satisfaction: A behavioural perspective on the consumer", McGraw-Hill, New York
7. Teas, R K (1993) "Expectations, performance evaluation and consumers' perceptions of quality", Journal of Marketing 57
8. White and Schneider (2000) "Climbing the Commitment Ladder: The role of expectations disconfirmation on customers' behavioral intentions", Journal of Service Research 2(3)
9. Parasuraman, Berry and Zeithaml (1985) "A conceptual model of service quality and its implications for future research", Journal of Marketing 49(4)
10. Parasuraman, Berry and Zeithaml (1988) "SERVQUAL: A multiple-item scale for measuring perceptions of service quality", Journal of Retailing 64(1)
11. Gummesson, E (1992) "Quality dimensions: What to measure in service organizations", in Swartz, Bowen and Brown (Eds) "Advances in services marketing and management", JAI Press, Greenwich, CT
12. Heskett, Sasser and Schlesinger (1997) "The Service-Profit Chain", Free Press, New York
13. Zeithaml, Berry and Parasuraman (1990) "Delivering Quality Service", Free Press, New York
14. Cronin and Taylor (1992) "Measuring service quality: An examination and extension", Journal of Marketing 56
15. Parasuraman, Berry and Zeithaml (1991) "Refinement and reassessment of the SERVQUAL scale", Journal of Retailing 79
16. Johnson and Gustafsson (2000) "Improving Customer Satisfaction, Loyalty and Profit: An Integrated Measurement and Management System", Jossey-Bass, San Francisco, California
17. Schneider and White (2004) "Service Quality: Research Perspectives", Sage Publications, Thousand Oaks, California
18. Allen and Rao (2000) "Analysis of Customer Satisfaction Data", ASQ Quality Press, Milwaukee
19. Cronin and Taylor (1994) "SERVPERF versus SERVQUAL: Reconciling performance-based and perceptions-minus-expectations measurement of service quality", Journal of Marketing 58
20. Hill and Alexander (2006) "The Handbook of Customer Satisfaction Measurement", 3rd Edition, Gower, Aldershot
21. Reichheld, Frederick (2003) "The One Number You Need to Grow", Harvard Business Review 81 (December)
22. Heskett, Sasser and Schlesinger (2003) "The Value-Profit Chain", Free Press, New York
CHAPTER TWELVE
Actionable outcomes
Since the purpose of measuring customer satisfaction is to improve it, the first priority of the CSM process is to produce actionable outcomes that will drive improvement in customer satisfaction. This may sound obvious, but it is easy to become obsessed with the survey process, because it is vital that a sound methodology is followed to ensure accurate results. For that reason, the validity and credibility of the methodology have been the overwhelming preoccupation of this book. So far. But now it is time to redress the balance, because although a reliable survey is essential, it is not an end in itself but the means to the much more important objective of improving customer satisfaction. This chapter will therefore begin to focus on techniques and advice that will maximise the likelihood of a CSM survey leading to an increase in customer satisfaction and loyalty.
At a glance
This chapter will:
a) Review the conclusions of the academic literature for turning CSM survey data into outcomes.
b) Explain gap theory – the traditional way of setting PFIs (priorities for improvement).
c) Detail other factors that can be used to determine PFIs.
d) Consider ways of benchmarking CSM data.
e) Outline techniques to maximise clarity of reporting.
12.1 Using CSM data to make decisions
As early as the 1980s, both academics and experienced practitioners abandoned the idea that outcomes would be based simply on the lowest satisfaction or performance scores1. This was based on the realisation that customers’ satisfaction judgements were based not on an objective evaluation of the organisation’s performance but on a subjective opinion of the extent to which it had met their requirements2. This resulted in ‘gap theory’, which quite simply based satisfaction improvement outcomes on the size of the gap between importance and satisfaction scores, a large gap
indicating that the organisation had fallen well short of meeting customers’ ‘requirements’1,3,4,5,6,7. There has been much debate and confusion over whether customers’ requirements refer to expectations or needs. Parasuraman et al considerably added to the confusion when asserting that the meaning of ‘expectations’ was different when applied to customer satisfaction (predictions by the customer of what is likely to happen during a service encounter) and service quality (the desires or wants of the customer, or what the supplier should deliver). In practice, most academics and practitioners would see the former as a definition of expectations and the latter as a definition of requirements or needs. Most also conclude that expectations are difficult if not impossible to measure8,9. In particular, measuring customers’ expectations after the service encounter (which surveys inevitably do) is flawed because the expectations are usually modified by the experience8,10.

KEY POINT
It is the relative importance of customers’ requirements that should be measured for CSM. Customer expectations do not provide a suitable basis for measurement.

Consequently, requirements are considered more suitable than expectations as measurable antecedents of the customer experience, and it is the relative importance of the requirements that is of interest. A CSM survey therefore remains faithful to our original definition of customer satisfaction by measuring the extent to which the supplier meets its customers’ requirements11. This accords with the intuitively sound notion that there is little commercial value in being good at something that doesn’t matter to customers. On the contrary, customer satisfaction is best achieved by ‘doing best what matters most to customers’ and failure to do this will be reflected in satisfaction gaps12.
Whether presented as gaps between importance and satisfaction scores (Figures 12.1 and 12.2) or in the form of a two by two importance-satisfaction matrix13, as shown in Figure 12.3, the principle is the same – the supplier’s priorities for improvement will be the factors where it is least meeting its customers’ requirements12,14.

KEY POINT
To improve customer satisfaction, organisations should focus resources on areas where they are least meeting customers’ requirements.

However, in more recent years it has been increasingly recognised that there are more powerful analytical models based on inter-dependence techniques that offer a more sophisticated approach to the analysis of customer satisfaction data15,16,17,18. In order to improve further, companies already achieving good levels of customer satisfaction would be well advised to utilise such approaches to maximise their understanding of the drivers of customer satisfaction, since it becomes more difficult to improve customer satisfaction as levels increase. We will examine more sophisticated
approaches later in this book, but since many organisations have poor levels of customer satisfaction because they do not meet customers’ requirements, the basic ‘satisfaction gaps’ approach remains perfectly adequate for identifying appropriate areas for improvement. This is therefore where we will start our examination of producing actionable outcomes.
12.2 Satisfaction gaps
If we return to our retailer’s data, we can see an illustration of gap analysis in Figure 12.1. Where the satisfaction score for a requirement is lower than the importance score there is a satisfaction gap, indicating that the organisation is not meeting customers’ requirements. Gap analysis is not rocket science: if the satisfaction bar is shorter than the importance one, the company may have a problem! But that is the main strength of the chart. It is clear, simple and obvious. Anybody in the organisation can look at it, understand it and draw the right conclusions.

FIGURE 12.1 Meeting customers’ requirements
[Paired bar chart of importance and satisfaction scores (scale 6.5 to 10) for: choice of products, expertise of staff, price level, speed of service, quality of products, layout of store, staff helpfulness, staff appearance]
There are some areas, such as ‘choice of products’ and ‘price level’ where the company concerned is more or less meeting customers’ requirements. There are some, such as ‘staff appearance’ and ‘store layout’ where customers’ requirements are being exceeded. Most importantly there are some attributes where the company is falling short and these are the ones it needs to focus on if it wants to improve customer satisfaction. These are the PFIs, the priorities for improvement.
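The gap calculation itself is simple arithmetic and can be sketched as follows. The scores below are invented approximations mirroring the retailer example, not the actual survey data.

```python
# Sketch of the gap calculation: gap = importance minus satisfaction, ranked
# largest first to suggest the PFIs. Scores are invented approximations
# mirroring the retailer example, not the actual survey data.

importance = {
    "Expertise of staff": 9.3, "Speed of service": 9.0,
    "Quality of products": 8.8, "Price level": 9.2,
    "Choice of products": 9.5, "Staff helpfulness": 7.8,
    "Layout of store": 8.0, "Staff appearance": 7.2,
}
satisfaction = {
    "Expertise of staff": 7.9, "Speed of service": 7.8,
    "Quality of products": 8.1, "Price level": 9.0,
    "Choice of products": 9.4, "Staff helpfulness": 7.7,
    "Layout of store": 8.4, "Staff appearance": 8.1,
}

gaps = {req: round(importance[req] - satisfaction[req], 2) for req in importance}
# A negative gap means customers' requirements are being exceeded
for req, gap in sorted(gaps.items(), key=lambda kv: kv[1], reverse=True):
    print(f"{req:20s} {gap:+.2f}")
```

With these illustrative scores the largest gap is on expertise of staff, even though the lowest satisfaction score belongs to speed of service – the distinction the text goes on to make.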
FIGURE 12.2 Satisfaction gaps
[Bar chart of satisfaction gaps (-1.5 to +1.5) for: expertise of staff, speed of service, quality of products, price level, choice of products, staff helpfulness, layout of store, staff appearance]
Figure 12.2 simply shows the size of the satisfaction gap, calculated by subtracting the satisfaction score from the importance score. Where customers’ requirements are being exceeded, as on ‘staff appearance’, the gaps chart shows a negative satisfaction gap. The bigger the gap, the bigger the problem, and you can see from Figures 12.1 and 12.2 that the biggest PFI, the area with the greatest potential for improving customer satisfaction, is not the attribute with the lowest satisfaction score (‘speed of service’) but the one with the largest gap – ‘expertise of staff’.

KEY POINT
The Satisfaction Gaps indicate the extent to which the organisation is meeting or failing to meet its customers’ requirements.

Early forms of importance-performance analysis were typically in the form of a 2 x 2 importance-satisfaction matrix19, shown in Figure 12.3. Requirements with high importance and low satisfaction will be found towards the top left-hand corner of the matrix, so the PFIs are found in the top left-hand cell. There are three reasons why the gaps approach illustrated in Figures 12.1 and 12.2 is preferable to the matrix.

1. The charts in Figures 12.1 and 12.2 are much clearer and simpler and will consequently be much better for communicating the results across the organisation. Figure 12.1 is a very effective way of encouraging staff to think about where the organisation is meeting, exceeding or failing to meet
FIGURE 12.3 Importance - satisfaction matrix
[2 x 2 matrix plotting importance (y axis, 7 to 9.5) against satisfaction (x axis, 7 to 9.5). Quadrants: IMPROVE PERFORMANCE (top left), MAINTAIN PERFORMANCE (top right), SOME IMPROVEMENT (bottom left) and OVER PERFORMANCE (bottom right). Plotted requirements: expertise of staff, speed of service, quality of products, choice of products, price, layout of store, helpfulness of staff, appearance of staff]
customers’ requirements. It also helps them to understand that the areas in most need of attention are those where the organisation is least meeting its customers’ requirements.

2. If satisfaction and importance have been scored for 20 customer requirements, the PFI cell could be quite crowded, reducing the actionability of the results. Although the theory clearly dictates focus on the requirements closest to the top left-hand corner of the matrix, there would be temptation in many organisations to have initiatives and action plans for all the requirements in the PFI cell. If this resulted in the company trying to address too many PFIs, the effectiveness of customer satisfaction improvement strategies would be severely weakened. By contrast, Figure 12.2 provides a totally unambiguous picture of the gap sizes in order of magnitude, helping to reduce unnecessary debate about what the PFIs should be.

3. A 2 x 2 matrix is a very useful vehicle for displaying information that is difficult to compare, such as the relationship between people’s height and weight or years of full-time education and subsequent salary. The scales on the x and y axes of the matrix can be based on completely different measures of varying magnitude. That’s why the 2 x 2 matrix is so useful for comparing importance and impact, as shown in Figures 5.2 and 10.3. Since the measures for importance and satisfaction are directly comparable, on the same scale, there is no need for the greater complexity of the 2 x 2 matrix.
12.3 Determining the PFIs
The most effective way to improve customer satisfaction is to focus on one or a very small number of PFIs20. Making small improvements across many of the customer requirements achieves little gain as they often go unnoticed by customers, and even if they are noticed, it usually takes a lot of evidence to shift customers’ attitudes. To improve customer satisfaction, therefore, big, noticeable improvements are necessary, and since most organisations have limited resources, this is feasible only if efforts are focused on just one, or a very small number, of PFIs. To focus the organisation’s resources to this extent, a clearly understood and widely accepted method of determining the PFIs is essential. For organisations near the beginning of their CSM process it will often be sufficient to focus solely on the satisfaction gaps, nominating the two or three requirements with the biggest gaps as the PFIs. This has the advantage of being clearly understood and intuitively sound – the organisation is focusing on the areas where it is least meeting its customers’ requirements. When customer satisfaction is first measured, most organisations will find that they have some quite large satisfaction gaps, and these should be addressed first. Focusing solely on the satisfaction gaps also has the great merit of minimising unproductive debate about what to address and maximising time and effort devoted to improving customer satisfaction.

KEY POINT
For organisations with poor levels of customer satisfaction or at the beginning of their CSM journey, Satisfaction Gaps usually provide a perfectly adequate basis for selecting PFIs.

As the organisation closes its most obvious satisfaction gaps, it will need to take more factors into consideration when determining its PFIs. Our hypothetical retailer, for example, could base its PFIs on a combination of the following five factors, most of which we outlined in the chapter on basic analysis:

1. Satisfaction gap
The most important factor will remain the size of the gap. Normally a greater gain in customer satisfaction will be achieved by closing a large gap rather than a small gap. On a 10-point scale any satisfaction gap above 1 point is a concern and gaps in excess of 2 are serious. The gap sizes above 1 shown in Figure 12.2 suggest that ‘expertise of staff’ and ‘speed of service’ should be PFIs.

2. Satisfaction drivers
Taking impact as well as importance into account, the satisfaction drivers were shown in Figure 10.3. They play a prominent role in customers’ judgements of the company and they also point to ‘expertise of staff’ and ‘speed of service’ as the PFIs.

3. Dissatisfaction drivers
These are the factors that most irritate customers. Regardless of the average
satisfaction scores achieved, these are the areas where the most customers are giving very low scores, and in the case of our retailer they are ‘expertise of staff’, ‘speed of service’ and ‘helpfulness of staff’ (see Figure 10.9).

4. Loyalty differentiators
If a company can improve its performance on the loyalty differentiators, it will strengthen the loyalty of its most loyal customers and reduce dissatisfaction and defection amongst its least loyal (see Figure 10.11).

5. Business impact
Some PFIs will be more difficult, more time consuming and more costly to address than others. Therefore, the decision to invest in customer satisfaction improvement will often be a trade-off between the cost of making the improvements and the potential gain from doing so. To clarify this business impact decision it is helpful to plot the potential satisfaction gain (shown on the x axis and based on the size of the satisfaction gap) against the cost and difficulty of making the necessary improvements. A business impact matrix for the retailer is shown in Figure 12.4. Based on categorising customer requirements into three broad bands according to the cost and difficulty of making improvements, the Business Impact Matrix illustrates where the most cost-effective gains can be made.

FIGURE 12.4 Business impact matrix
[Matrix plotting benefit (low to high, x axis) against cost of improvement (low, medium, high, y axis). Layout of store sits at high cost and low benefit; price level, quality of products, staff appearance and choice of products occupy the middle band; speed of service, staff helpfulness and expertise of staff sit towards low cost and high benefit]

As shown in the chart, some requirements, particularly those in the cells in the bottom right-hand corner, such as ‘speed of service’ and ‘staff helpfulness’, should bring high returns due to their large satisfaction gaps and low cost. However, requirements in the top left-hand corner, such as ‘layout of the store’, would bring little benefit, due to low or non-existent satisfaction gaps and high relative cost. Whilst we are not advocating avoidance of the difficult issues, it is highly beneficial if there are one or more ‘quick wins’ that can be addressed relatively easily
since it is very helpful if both customers and employees can see prompt action being taken as a direct result of the survey.

KEY POINT
As organisations’ CSM processes mature, more factors will be used to determine PFIs.

An outcomes table is very useful for summarising the PFI selection process. If the PFIs are derived from several sources of information, it is very helpful to summarise everything in one easy-to-assimilate visual format such as the outcomes table shown in Figure 12.5. This enables everyone in the organisation to quickly understand the reasons behind the selection of the PFIs, which minimises unproductive debate and moves the company as swiftly as possible into the implementation phase.
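The tallying behind such a table can be sketched as follows. The flags for expertise, speed and helpfulness follow the retailer discussion in the text; the rest of the tick pattern is illustrative, not taken from Figure 12.5.

```python
# Hedged sketch of tallying an outcomes table: each requirement is flagged
# against the factors that point to it, and the totals suggest the PFIs.
# Flags for expertise, speed and helpfulness follow the text; the rest of
# the pattern is illustrative.

factors = ["Satisfaction gap", "Satisfaction drivers",
           "Dissatisfaction drivers", "Loyalty differentiators",
           "Business impact"]

ticks = {
    "Expertise of staff":  {"Satisfaction gap", "Satisfaction drivers",
                            "Dissatisfaction drivers", "Business impact"},
    "Speed of service":    {"Satisfaction gap", "Satisfaction drivers",
                            "Dissatisfaction drivers", "Business impact"},
    "Staff helpfulness":   {"Dissatisfaction drivers", "Business impact"},
    "Choice of products":  set(), "Price": set(),
    "Quality of products": set(), "Layout of store": set(),
    "Appearance of staff": set(),
}

totals = {req: len(fs) for req, fs in ticks.items()}
pfis = sorted(totals, key=totals.get, reverse=True)[:2]
print(pfis)  # ['Expertise of staff', 'Speed of service']
```

Printing the full `totals` dictionary reproduces the TOTAL column of an outcomes table and makes the reasoning behind the PFI selection transparent.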
FIGURE 12.5 Outcomes table
[Table with the requirements as rows (choice of products, expertise of staff, price, speed of service, quality of products, layout of store, helpfulness of staff, appearance of staff) and columns for satisfaction gap, satisfaction drivers, dissatisfaction drivers, loyalty differentiators, business impact and total]
12.4 Benchmarking satisfaction
Organisations are increasingly interested in benchmarking their performance across all aspects of business management, hence the growing popularity of balanced scorecard21,22 approaches to management information and the desire of many organisations to have their balanced scorecard measures externally audited by bodies such as EFQM23 and Malcolm Baldrige24. Some areas of business performance lend themselves much more readily than others to comparison against other organisations. Whilst many tangible metrics, such as sales per employee, debtor days,
staff turnover, can be easily benchmarked across companies and sectors, customer satisfaction measures can be much harder to compare. The main difficulties arise from use of different methodologies and from asking different questions.

12.4.1 Different methodologies
If different methodologies are used, benchmarking is impossible. There is no way of comparing a measure of customer satisfaction generated by one company using a 10-point numerical scale with one produced by another organisation using a 5-point verbal scale. Anyone wishing to change from one scale to the other whilst maintaining some tracking comparability can do so only by duplicating several questions on the same questionnaire with the same sample, comparing the outcomes as a percentage of maximum and calculating a conversion factor accordingly.

KEY POINT
Only customer satisfaction measures based on compatible methodologies can be benchmarked.

12.4.2 Different questions
If one organisation has asked exactly the same questions as another using the same methodology, the two can obviously compare the answers to the questions. However, unless all aspects of the first company’s operations are identical to those of the second company, this approach is very unlikely to compare accurate measures of satisfaction, since we know that asking the right questions, based on the lens of the customer, is a fundamental element of a measure that truly reflects how satisfied or dissatisfied customers feel. Put simply, unless you use the same criteria that the customers use to judge the organisation, a survey will never arrive at the same satisfaction judgement as the customers. Consequently, it is almost inevitable that different organisations, even in the same sector, will be asking at least some different questions.

12.4.3 How to compare
Quite simply, and logically, organisations should make comparisons in the same way that customers do.
At the overall level customers make judgements based on the extent to which suppliers have met their requirements – whatever those requirements are. As we know from Chapter 11, the most reliable measure of overall customer satisfaction is a composite index with the individual components weighted according to their importance to customers. Organisations can therefore compare this measure of their success in meeting customers’ requirements with the customer satisfaction index of any other organisation across all sectors.

12.4.4 Comparisons across sectors
In fact, it is essential to compare across sectors since this is precisely what customers do. Customers typically base their expectations on the best service they have encountered
across the wide range of different suppliers they use from all sectors. Moreover, many successful organisations pursue best practice benchmarking outside their own sector as they see this as a much better way of making a paradigm shift than if they look solely at their own industry, where companies are broadly doing the same things. For example, Southwest Airlines achieved the fastest airport turnaround time in its industry by benchmarking itself against Formula 1 teams in the pits. Sewell Cadillac benchmarks its cleanliness against hospitals and has adapted several medical technologies to help its mechanics achieve better results when diagnosing and fixing car faults.

KEY POINT
Customer satisfaction should be benchmarked across sectors. A weighted customer satisfaction index from a survey based on the lens of the customer provides a perfect basis for cross-industry benchmarking since it is a measure of the extent to which the organisation has met its customers’ requirements.

12.4.5 Benchmarking databases
The American Customer Satisfaction Index25 is by far the biggest customer satisfaction benchmarking database since it claims to cover 60% of the US economy. At the time of writing, there is no UK equivalent. Closest is The Leadership Factor’s Satisfaction Benchmark database26, compiled using data from around 500 customer satisfaction surveys per annum across all sectors. However, the Institute of Customer Service launched a UK Customer Satisfaction Index27 in 2007 which, over time, should offer a benchmarking resource similar to that provided by the American Customer Satisfaction Index.

12.4.6 Incorporating benchmarking into survey outcomes
It is always very useful to incorporate benchmarking into customer satisfaction survey outcomes at two levels.
First, it is very helpful to know how good the organisation’s customer satisfaction index is and this can be achieved only by seeing it from the customers’ perspective – how the company’s service compares with other organisations generally. Going back to our retailer, their customer satisfaction index, based on the eight questions we have importance and satisfaction scores for, would be 82.2%. Figure 12.6, based on The Leadership Factor’s Satisfaction Benchmark database26, shows that the retailer is delivering a good level of customer satisfaction, but, compared with other organisations, not a very good one. It demonstrates to the retailer and its employees that there is plenty of opportunity to improve. As well as the overall index, it can be even more useful to benchmark the organisation’s performance on the individual requirements measured. Figure 12.7 shows that the retailer is considerably worse than other companies at satisfying its customers on ‘quality of products’ and that its relative performance is poor on ‘staff helpfulness’.
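The weighted index referred to above can be sketched in a few lines. This is an illustrative calculation only — the requirement names and scores below are hypothetical, not the retailer's actual survey figures — following the importance-weighted approach described in Chapter 11 (importance scores converted to weights, then applied to satisfaction scores out of 10).

```python
def satisfaction_index(scores):
    """Importance-weighted customer satisfaction index (% of maximum).

    scores: dict mapping requirement -> (importance, satisfaction),
    both on 1-10 scales.
    """
    total_importance = sum(imp for imp, _ in scores.values())
    weighted_sat = sum(imp / total_importance * sat
                       for imp, sat in scores.values())
    return weighted_sat / 10 * 100  # express as % of the 10-point maximum

# Hypothetical retailer data for illustration
scores = {
    "Quality of products": (9.5, 8.0),
    "Staff helpfulness":   (9.0, 8.1),
    "Price":               (8.5, 8.3),
    "Speed of service":    (8.0, 8.2),
}
print(round(satisfaction_index(scores), 1))
```

Because the weights sum to one, the index is directly comparable across organisations regardless of which requirements each survey measured — the property that makes cross-sector benchmarking possible.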
FIGURE 12.6 Benchmarking the satisfaction index (ABC Ltd's index of 82.2% plotted between the bottom quartile and top quartile of the benchmark database)

FIGURE 12.7 Benchmarking the requirements (relative performance, worse to better, on: Price, Choice of product, Layout of store, Staff appearance, Speed of service, Expertise of staff, Staff helpfulness, Quality of products)
By contrast, although ‘price’ wasn’t the retailer’s highest satisfaction score, it is very high compared with customer satisfaction on price generally. Price is a good example of an individual requirement that benefits tremendously from benchmarking. Since customers will always be reluctant to express delight with prices, it is very common for companies to record low satisfaction scores for price. Based solely on the survey data, price will often have a large satisfaction gap and will appear to be a PFI. However, when benchmarked, an apparently low satisfaction score for price is often shown to be close to the average achieved by other companies, and therefore not a cause for concern. Looking at the benchmarking data in Figure 12.7, our retailer should spot an opportunity to increase its prices to fund improvements on ‘speed of service’ and ‘expertise of staff ’. The value of benchmarking can be seen if a column is added to our retailer’s outcomes table to incorporate the new information it has provided. Shown in Figure 12.8, the revised benchmarking table demonstrates that ‘staff helpfulness’ is an equal concern with ‘speed of service’ and ‘expertise of staff ’ and that ‘quality of products’ is a bigger problem than indicated by the earlier data.
FIGURE 12.8 Revised outcomes table (rows: Choice of products, Expertise of staff, Price, Speed of service, Quality of products, Layout of store, Helpfulness of staff, Appearance of staff; columns: Requirements, Satisfaction gap, Satisfaction drivers, Dissatisfaction drivers, Loyalty differentiators, Business impact, Benchmarking, Total)
12.5 Clarity of reporting
The outcomes table is a good example of clarity of reporting. Most people in an organisation do not have the time or the inclination to wade through large volumes of survey output. They need very concise information, preferably in a visual form that is
easy to understand and leads to authoritative recommendations. Too much information or a lack of definite conclusions leads to unproductive debate and delays the development of satisfaction improvement action plans. Although someone in the organisation needs sufficient understanding of the research to verify its accuracy, it is not productive to cascade details such as segment splits, standard deviations, confidence intervals etc. It is much more useful to put effort into developing reporting media that enable relevant employees to receive just as much information as they need to motivate and help them to improve customer satisfaction. Our retailer, for example, might produce an action map like the one shown in Figure 12.9. It is based on the fact that whilst the survey data will highlight company-wide PFIs, it will not be possible for all staff across the organisation to contribute equally to addressing them. The action map therefore looks at the extent to which different parts of the organisation can make a difference to the PFIs and allocates PFI responsibilities by department, team, region and store as appropriate. It also provides a clear visual guide for management summarising who is responsible for what, thus helping them to monitor the implementation of satisfaction improvement action plans.

KEY POINT To facilitate action to improve customer satisfaction, reporting of survey data should be as clear and simple as possible.

FIGURE 12.9 Action map
(The action map matrix allocates each requirement — Expertise of staff, Speed of service, Quality of products, Price, Choice of products, Staff helpfulness, Layout of store, Staff appearance — a status of Major PFI, Minor PFI or All Clear for each central function (Marketing, Management, Operations, Facilities, Personnel, Customer service) and for individual stores such as Leeds, Swindon, Oxford, Canterbury and Leicester.)
If the retailer is large it could have several hundred stores. Action maps would therefore be cascaded from national to regional to area level. It would also be useful
to consider alternative media such as the web for reporting the results. Interactive web reporting enables the results database to be stored on a secure website with authorised staff able to interrogate it according to virtually any criteria they wish. The store manager in Oxford, for example, may want to look up the satisfaction scores achieved by stores in locations with a similar demographic profile such as Cambridge or Bath. Thinking about the PFIs for his own store, he might want to interrogate the database to discover which stores achieved the highest satisfaction scores for 'expertise of staff' and 'speed of service' so he can learn from their success.

KEY POINT Internal benchmarking is a very effective tool for improving customer satisfaction.

Any organisation that has multiple stores, branches, sites, business units etc. will find internal benchmarking extremely effective in driving customer satisfaction improvement strategies. For such companies, reporting should therefore be focused as much as possible on making comparisons across the different units. Benchmarking charts at overall and attribute level like the ones shown in Figures 12.6 and 12.7 should be adapted to make internal comparisons. This approach may not be universally popular, especially with managers of poorer performing units, but it provides a powerful incentive to improve since nobody wants to be bottom of the internal league table! This approach was used very successfully by Enterprise Rent-A-Car (see section 3.4.1) to improve customer satisfaction.
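The internal league table idea can be sketched as a simple ranking of units by their satisfaction index. The store names and index values below are hypothetical.

```python
def league_table(unit_scores):
    """Rank business units by satisfaction index, best first.

    unit_scores: dict mapping unit name -> satisfaction index (%).
    Returns a list of (rank, unit, index) tuples.
    """
    ranked = sorted(unit_scores.items(), key=lambda kv: kv[1], reverse=True)
    return [(pos + 1, unit, idx) for pos, (unit, idx) in enumerate(ranked)]

# Hypothetical store indices for illustration
stores = {"Oxford": 79.4, "Leeds": 84.1, "Swindon": 77.8,
          "Canterbury": 82.5, "Leicester": 80.9}
for rank, store, idx in league_table(stores):
    print(f"{rank}. {store}: {idx}%")
```

The same ranking could be produced per requirement, so a manager can see which unit leads on, say, 'speed of service' and learn from it.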
Conclusions
1. The most effective way to improve customer satisfaction is to focus on a very small number of PFIs (priorities for improvement) rather than diluting actions too thinly across too many customer requirements.
2. The starting point for prioritising improvements is gap analysis, which examines the difference between the satisfaction and importance scores, a satisfaction score more than one point below its corresponding importance score demonstrating that the organisation is not meeting customers' requirements.
3. For clarity of reporting, use a straight comparison of importance and satisfaction scores illustrated in a simple bar chart rather than the less obvious satisfaction-importance matrix.
4. For organisations at the start of their CSM journey gap analysis will be sufficient for highlighting PFIs, but once the quick wins have been successfully addressed it will be helpful to use additional factors to determine PFIs, including satisfaction drivers, dissatisfaction drivers, loyalty differentiators, business impact and benchmarking.
5. Since a weighted customer satisfaction index is a measure of the extent to which an organisation is meeting its customers' requirements, it can be benchmarked against any organisation, whatever questions have been asked on its CSM survey – provided the questions are based on the lens of the customer.
6. Organisations should benchmark their index and the individual customer requirements against other organisations from outside as well as inside their own sector.
7. Companies with multiple business units should use internal benchmarking as a powerful driver of satisfaction improvement.
8. The outcomes of a CSM survey should be reported very widely around the organisation, but the information reported should be concise, clear and simple with authoritative conclusions.
References
1. Parasuraman, Berry and Zeithaml (1988) "SERVQUAL: a multiple-item scale for measuring consumer perceptions of service quality", Journal of Retailing 64(1)
2. Peters and Austin (1986) "A Passion for Excellence", William Collins, Glasgow
3. Schneider and White (2004) "Service Quality: Research Perspectives", Sage Publications, Thousand Oaks, California
4. Helsdingen and de Vries (1999) "Services marketing and management: An international perspective", John Wiley and Sons, Chichester
5. Parasuraman, Berry and Zeithaml (1994) "Reassessment of expectations as a comparison standard in measuring service quality: Implications for further research", Journal of Marketing 58
6. Oliver, Richard L (1997) "Satisfaction: A behavioural perspective on the consumer", McGraw-Hill, New York
7. Churchill and Surprenant (1982) "An investigation into the determinants of customer satisfaction", Journal of Marketing Research 19
8. Carman, J M (1990) "Consumer perceptions of service quality: An assessment of the SERVQUAL dimensions", Journal of Retailing 66(1)
9. Teas, R K (1993) "Expectations, performance evaluation and consumers' perceptions of quality", Journal of Marketing 57
10. Clow and Vorhies (1993) "Building a competitive advantage for service firms", Journal of Services Marketing 7(1)
11. Ennew, Reed and Binks (1993) "Importance-performance analysis and the measurement of service quality", European Journal of Marketing 27(2)
12. Hill, Brierley and MacDougall (2003) "How to Measure Customer Satisfaction", Gower, Aldershot
13. Hemmasi, Strong and Taylor (1994) "Measuring service quality for planning and analysis in service firms", Journal of Applied Business Research 10(4)
14. Joseph M, McClure and Joseph B (1999) "Service quality in the banking sector: The impact of technology on service delivery", The International Journal of Bank Marketing 17(4)
15. Johnson and Gustafsson (2000) "Improving Customer Satisfaction, Loyalty and Profit: An Integrated Measurement and Management System", Jossey-Bass, San Francisco, California
16. Allen and Rao (2000) "Analysis of Customer Satisfaction Data", ASQ Quality Press, Milwaukee
17. Ryan, Buzas and Ramaswamy (1995) "Making Customer Satisfaction Measurement a Power Tool", Marketing Research 7, 11-16 (Summer)
18. Fornell, Claes (2001) "The Science of Satisfaction", Harvard Business Review 71 (March-April)
19. Martilla J A and James J C (1977) "Importance-Performance Analysis", Journal of Marketing 41 (January)
20. Hill and Alexander (2006) "The Handbook of Customer Satisfaction and Loyalty Measurement", 3rd Edition, Gower, Aldershot
21. Kaplan and Norton (1996) "The Balanced Scorecard", Harvard Business School Press, Boston
22. The Balanced Scorecard Institute: www.balancedscorecard.org
23. www.efqm.org
24. www.baldrige.nist.gov
25. The American Customer Satisfaction Index: www.theacsi.org
26. The Leadership Factor's customer satisfaction benchmarking database: www.leadershipfactor.com/surveys/
27. See the Institute of Customer Service and UKCSI: www.instituteofcustomerservice.com and www.ukcsi.com
CHAPTER THIRTEEN
Comparisons with competitors
As we said in the last chapter, customers tend to benchmark organisations very widely, comparing them with their experiences across many different sectors. Often customers' recent experience is limited to one organisation per sector. They simply don't currently deal with more than one local council, one mortgage lender, one mobile phone provider or one doctor. At other times, however, customers are much more active in making comparisons; when they bought their house and needed a new mortgage, for example, or when their annual phone contract or insurance policy was due for renewal. In some markets customers may habitually use one supermarket, for example, or drive one make of car for three or four years before replacing it, but are nevertheless frequently making comparisons between competing suppliers even though they are not switching. In other markets customer promiscuity is much more prevalent. In most industrial supply markets, for example, dual or multiple sourcing is widespread. In the leisure sector customers often frequent more than one restaurant, tourist destination or theatre. It is therefore very useful for some companies to understand how customers make comparisons and choices between competing suppliers. This chapter explains how to do it.
At a glance
In this chapter we will:
a) Outline a very simple survey approach.
b) Consider the methodological implications of competitor surveys.
c) Explain how to conduct a market standing survey.
d) Consider the added dimension of relative perceived value.
e) Explore switching behaviour and its drivers.
13.1 Simple comparison
A very easy method of making a simple comparison with competitors can be utilised by any organisation that conducts a customer satisfaction survey. In its simplest form
it requires the addition of only one question to the survey, such as: "Compared with other banks / supermarkets / office equipment suppliers / etc that you are familiar with, how would you rate XYZ?" Customers can be given a simple range of options, including one for those with no experience of other suppliers, resulting in the type of output shown in Figure 13.1.

FIGURE 13.1 Simple comparison (The best: 11%; Better than most: 52%; About the same as most: 24%; Worse than most: 4%; The worst: 1%; Not familiar with others: 8%)
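Turning raw responses to the comparison question into a distribution of this kind is straightforward. A minimal sketch, using hypothetical responses:

```python
from collections import Counter

OPTIONS = ["The best", "Better than most", "About the same as most",
           "Worse than most", "The worst", "Not familiar with others"]

def comparison_distribution(responses):
    """Percentage of respondents choosing each comparison option."""
    counts = Counter(responses)
    n = len(responses)
    return {opt: round(100 * counts[opt] / n) for opt in OPTIONS}

# Hypothetical responses from 10 customers
responses = (["Better than most"] * 5 + ["About the same as most"] * 2 +
             ["The best"] + ["Worse than most"] + ["Not familiar with others"])
print(comparison_distribution(responses))
```

Listing every option explicitly, including those nobody chose, keeps the reported distribution complete from survey to survey.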
This question can also be utilised by organisations such as local councils, membership bodies, charities or housing associations whose customers don’t deal with competitors but can often make comparisons against other organisations that they perceive to be broadly similar. In these circumstances, the question would typically be worded: “In your opinion, how does XYZ compare with other similar organisations?” Whether against direct competitors in a promiscuous market or against broadly similar organisations in a less competitive one, understanding of how customers make comparisons will be enhanced by adding a second question: “When making that comparison, which other organisations did you compare XYZ against?” If it is added to a customer satisfaction survey, the big disadvantage of this question in competitive markets is its biased sample. All the respondents have chosen to be customers of the company conducting the survey but they have not all chosen to deal with all its competitors. It is therefore a reasonable assumption that the sample will be more favourably disposed towards XYZ than a randomly selected sample of all buyers in the market.
KEY POINT A simple comparison question provides a good overview of how an organisation is seen by its customers relative to other similar organisations, but will be less useful in highly competitive markets.
13.2 Methodology implications of competitor surveys
13.2.1 Sampling
To overcome the limitations of the simple comparison question described above, the customers taking part in the survey must be a random and representative sample of all buyers in the market, not just the company's own customers. This adds a considerable layer of difficulty to the survey process since most companies do not possess a comprehensive database of all the buyers in the market. In addition to their own customers, most companies do have a database of potential customers built from enquiries, quotations and sometimes bought or compiled lists of customers. In some industrial markets with relatively few customers it is quite feasible to compile a very accurate database of all buyers of a particular product or service, but in mass markets the task is much more difficult. For universal purchases such as groceries in B2C markets and stationery in B2B markets, it is easy to source lists of all consumers or all businesses. However, for products that are not universal but where there are many suppliers and very large numbers of customers, e.g. personal pensions or business travel, building a truly comprehensive database of all customers in the market will be very difficult and often impossible. It is therefore acceptable to base the sample on a readily available list such as a trade directory or a bought list which, whilst almost certainly not covering the full universe of customers, will not be biased towards any of the competing suppliers. An alternative is for competitors to organise a syndicated survey through their own trade association, which can act as an 'honest broker'. The competing suppliers each provide a random sample of their own customers to the trade association, which then commissions an agency to undertake the survey.
When the survey is completed, the association typically provides all participating members with the same results, usually showing scores for all the competitors. Alternatively, each participant can be given their own scores compared with the market average. The disadvantage of syndicated surveys is that all companies taking part receive the same information, so it provides no competitive advantage.

KEY POINT Competitor comparison surveys must be based on a random and representative sample of all the customers in the market for the product or service.
13.2.2 Data collection
If a neutral body such as a trade association conducts the survey, a self-completion methodology is usually feasible, the association's reputation boosting response rates and its perceived neutrality allaying respondents' unease at scoring competing suppliers. However, if a company undertakes its own competitor comparison survey, a self-completion methodology would not usually achieve an adequate response rate. Moreover, some customers feel uneasy about giving one company scores for its competitors, so if the information is provided it may not be reliable. If a company undertakes its own competitor survey it is therefore necessary to interview customers and to use an agency to conduct the interviews. Since it is important not to lead or bias respondents, the agency would not divulge the name of the commissioning supplier at the beginning of the interviews, but would normally be prepared to disclose it at the end. Even with this approach, some customers will be reluctant to participate, so response rates will be lower than for customer satisfaction surveys, reducing the reliability of the data. However, if a company wants this information without sharing it with competitors, this type of compromise has to be made.

13.2.3 The questionnaire
The questionnaire is very similar to the customer satisfaction questionnaire described in Chapter 9, with two main differences. At the beginning of the interview, the customer's reference set of competitors must be established by asking a simple awareness question such as: "Can you tell me which supermarkets / household insurance companies / PC suppliers / hydraulic seal manufacturers etc you are familiar with?" This question would usually be unprompted, with the interviewer having pre-coded options for all the leading competitors plus an 'other' option for any smaller suppliers.
In some markets, such as cars, most customers will be aware of too many brands to score in a reasonable length of interview, so in these cases the interviewer would restrict the options to the main competitors in a segment. For large executive cars, for example, the options might be limited to Audi, BMW, Lexus and Mercedes. Having established respondents' reference set, they are asked to score all the competitors that they are familiar with. Unlike customer satisfaction surveys, it is not essential that respondents have experienced a supplier's product or service, particularly in markets where competing suppliers have a strong image and customers have quite detailed perceptions of companies they have not used recently, or perhaps ever. Provided respondents are asked to score performance rather than satisfaction, this is quite feasible. It would also be made perfectly clear to respondents that they are scoring perceived rather than actual performance. Suitable wording would be: "I'd like to ask about your opinion of how the companies you mentioned perform on a number of different factors. I would like you to give each one a score out of ten where
a score of 1 out of 10 means that you believe the company performs very poorly on that factor. A score of 10 means that you believe they perform very well."

KEY POINT In competitor surveys, respondents score perceived performance rather than satisfaction.

The interviewer should score each supplier the respondent mentioned on each requirement before moving on to the next factor. As with customer satisfaction surveys, the requirements must also be scored for importance and this should be done after the performance scores have been collected. Of course, as with customer satisfaction surveys, the 15 to 20 customer requirements scored in the main survey would be based on the lens of the customer and identified through exploratory research.
13.3 Market standing
A study based on the methodology outlined above is known as a market standing survey and should cover all the factors that influence customers' choice and evaluation of suppliers in the market. Shown in Figure 13.2, the results enable a company to see how it compares against its competitors on all the most important supplier selection criteria used by customers.

FIGURE 13.2 Competitor comparisons (scores out of 10 for XYZ Ltd, Competitor 1 and Competitor 2 on: Fruit & vegetables, Stock availability, Bakery, Cleanliness, Queue times, Price, Fresh meat, Café)
Provided the customer requirements have also been scored for importance, a weighted index can be calculated for each supplier, using the formula explained in Chapter 11. Shown in Figure 13.3 for the three suppliers in this example, the outcome provides an accurate reflection of their relative market standing as perceived by customers1.
FIGURE 13.3 Market standing (weighted satisfaction index: XYZ Ltd 85.8%; Competitor 1 84.9%; Competitor 2 77.5%)
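A market standing calculation of this kind can be sketched as below. The importance weights are shared across suppliers (they come from the same respondents), while performance scores are collected per supplier. All figures here are hypothetical rather than the book's survey data, and only three requirements are shown where a real survey would cover 15 to 20.

```python
def weighted_index(importance, performance):
    """Importance-weighted index (% of maximum) for one supplier.

    importance, performance: dicts keyed by requirement, scores out of 10.
    """
    total_imp = sum(importance.values())
    weighted = sum(importance[req] / total_imp * score
                   for req, score in performance.items())
    return round(weighted / 10 * 100, 1)

# Shared importance scores and per-supplier perceived performance scores
importance = {"Stock availability": 9.0, "Price": 8.5, "Queue times": 8.0}
suppliers = {
    "XYZ Ltd":      {"Stock availability": 8.7, "Price": 8.6, "Queue times": 8.4},
    "Competitor 1": {"Stock availability": 8.5, "Price": 8.8, "Queue times": 8.1},
    "Competitor 2": {"Stock availability": 7.9, "Price": 7.6, "Queue times": 7.8},
}

standing = {name: weighted_index(importance, perf)
            for name, perf in suppliers.items()}
print(standing)
```

Because every supplier is scored against the same weighted requirements by the same market-wide sample, the resulting indices can be compared directly, as in Figure 13.3.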
Since customers' attitudes precede their behaviours, Figure 13.3 will typically provide a very reliable guide to future customer behaviour in the market and its consequent impact on market share, so it provides a sound basis for decisions about how to improve, but the analysis will have to be slightly different from the steps outlined in Chapter 12. The next two sub-sections explain.

13.3.1 Satisfaction gaps
It is always essential to 'do best what matters most to customers', so comparing importance and satisfaction scores remains the starting point. Initially the same as the analysis described in Chapter 12, Figure 13.4 shows the importance scores given by customers and compares them with XYZ's performance scores that we have already seen in Figure 13.2.

FIGURE 13.4 Doing best what matters most (importance versus XYZ performance on: Fruit & vegetables, Stock availability, Bakery, Cleanliness, Queue times, Price, Fresh meat, Café)
Figure 13.5 simply shows the size of the satisfaction gap for each of the eight requirements. As previously, requirements where satisfaction is higher than importance, indicating that customers' requirements are being exceeded, are shown with a negative gap.

FIGURE 13.5 Satisfaction gaps for XYZ (gaps plotted from -2.5 to 2 for: Stock availability, Price, Queue times, Fruit & vegetables, Bakery, Fresh meat, Cleanliness, Café)
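The gap calculation behind a chart like Figure 13.5 can be sketched as below, applying the one-point rule from Chapter 12 (a satisfaction score more than one point below its importance score suggests the requirement is not being met). The scores are hypothetical.

```python
def satisfaction_gaps(importance, performance, threshold=1.0):
    """Return (requirement, gap) pairs sorted largest gap first, plus
    the PFI candidates whose gap exceeds the threshold.

    Gap = importance - performance; a negative gap means the
    requirement is being exceeded.
    """
    gaps = sorted(((req, round(importance[req] - performance[req], 1))
                   for req in importance),
                  key=lambda pair: pair[1], reverse=True)
    pfis = [req for req, gap in gaps if gap > threshold]
    return gaps, pfis

# Hypothetical scores for three of the retailer's requirements
importance = {"Stock availability": 9.4, "Queue times": 9.0, "Café": 6.5}
performance = {"Stock availability": 7.5, "Queue times": 8.2, "Café": 7.1}

gaps, pfis = satisfaction_gaps(importance, performance)
print(gaps)   # largest gap first
print(pfis)   # requirements breaching the one-point rule
```

Sorting the gaps largest first mirrors the bar-chart presentation recommended in Chapter 12, making the PFIs visible at a glance.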
13.3.2 Competitor gaps
To make the most impact on improving the satisfaction of its own customers, XYZ should focus on addressing a small number of PFIs (priorities for improvement) based on its biggest satisfaction gaps. However, in a highly competitive market there is also another dimension to consider: XYZ's relative performance compared with its main competitors. Figure 13.6 shows the competitor gaps between XYZ and Competitor 1.

FIGURE 13.6 Competitor gaps XYZ versus Competitor 1 (gaps plotted from -1.5 to 2 for: Queue times, Price, Stock availability, Fresh meat, Bakery, Cleanliness, Fruit & vegetables, Café)
There are significant differences between Figures 13.5 and 13.6. Stock availability is the obvious PFI (priority for improvement) for XYZ if based on the satisfaction gaps, but queue times are a much bigger area of under-performance against Competitor 1. Since in the real world there would probably be at least 20 important customer requirements covered on the survey, and all companies have finite resources, XYZ may have to make choices between increasing the satisfaction of its own customers or closing the gaps with Competitor 1. Putting the two sets of data together into a competitor matrix would be very useful for making this decision.

FIGURE 13.7 Competitor matrix
(Scatter plot of the eight requirements with satisfaction gaps on the y axis and competitor gaps on the x axis: Stock availability sits highest on the satisfaction gap axis, with Price, Queue times, Fruit and vegetables, Fresh meat, Bakery, Cleanliness and Café spread below it.)
Requirements closest to the top left hand corner represent XYZ’s main areas of weakness, in terms of failing to satisfy its own customers and under-performing its main competitor. Whilst ‘stock availability’ would emerge as XYZ’s main PFI based on measuring the satisfaction of its own customers, the data from across the market suggest that improving ‘queue times’ and ‘price’ could also make a big difference to XYZ’s market position against Competitor 1. Before drawing conclusions about exactly where XYZ might decide to focus its resources, it is useful to consider an alternative method of making comparisons against competitors.
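A sketch of how the two gap measures might be combined into coordinates for a matrix like Figure 13.7. The sign convention here is an assumption for illustration: satisfaction gap = importance minus own score, competitor gap = own score minus competitor score (so a negative competitor gap means under-performing the competitor, placing the requirement towards the left of the matrix). All scores are hypothetical.

```python
def matrix_points(importance, own, competitor):
    """Coordinates for a competitor matrix: (competitor gap, satisfaction gap)."""
    return {req: (round(own[req] - competitor[req], 1),   # x: vs competitor
                  round(importance[req] - own[req], 1))   # y: vs importance
            for req in importance}

# Hypothetical scores for two requirements
importance = {"Stock availability": 9.4, "Queue times": 9.0}
own        = {"Stock availability": 7.5, "Queue times": 8.2}
competitor = {"Stock availability": 7.7, "Queue times": 9.1}

points = matrix_points(importance, own, competitor)
# Requirements with a large satisfaction gap AND a negative competitor
# gap (the top left of the matrix) are the strongest PFI candidates.
weak = [req for req, (cg, sg) in points.items() if sg > 1 and cg < 0]
print(points)
print(weak)
```

Note how the two measures can disagree: here queue times show the worse competitor gap while stock availability shows the worse satisfaction gap, which is exactly the trade-off the competitor matrix is designed to expose.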
13.4 Relative perceived value
The disadvantage of Figure 13.7 is that it can show only two of the competing suppliers in the market place. This is overcome by an alternative technique known as relative perceived value, developed in the USA by Bradley Gale2. The rationale for the technique is that customers buy the products and services that provide the best value, in other
words the benefits delivered relative to the cost of obtaining the product or service. Benefits include all benefits and costs include all costs. This simply means all the things that are important to customers and is totally consistent with the methodology covered earlier in this book. A survey for relative perceived value would therefore cover customers' top 15 to 20 requirements, scored for importance and satisfaction. A significant difference at the analysis stage is that the requirements are split into two groups: benefits and costs. As well as the obvious question on price, costs could include some other factors. Some may be immediately identifiable as costs, such as delivery charges, but others can be indirect costs such as travel. The cost and time involved in travelling to a more distant store, for example, are real additional costs to the customer. If two stores offer equal benefits, customers will make the rational choice and frequent the closer one. Most of the requirements measured will usually be benefits, but the cost dimension may contain three or four factors. Indices are now calculated for costs and for benefits, using the methodology explained in Chapter 11. Since the relative importance of the components of the cost index and the benefits index will differ, both indices are weighted. Each competitor therefore ends up with a cost index and a benefits index and these can be plotted on the type of matrix shown in Figure 13.8.

FIGURE 13.8 Competitive positioning
(Matrix plotting each supplier's cost index (y axis, 70-90) against its benefits index (x axis, 75-90), with diagonal bands dividing the chart into Zones 1 to 4; Competitor 1 sits nearest the top right corner, with XYZ in the middle and Competitor 2 lower down.)
If customers are highly satisfied with the benefits and the costs, the company would be close to the top right hand corner, like Competitor 1. XYZ offers a significantly better combination of benefits and costs than Competitor 2, but a poorer combination than Competitor 1. Movements in market share are based on a combination of two variables. First is the extent to which a company meets its own customers' requirements and therefore doesn't lose many customers. Second is the extent to which it is seen to over- or under-perform the competition in the eyes of all buyers in the market, a sign of its attractiveness to potential customers and its ability to win new customers. Figure 13.8 shows those two dimensions, and the diagonals that divide the chart into quartiles provide a good indication of the relative competitiveness of the three suppliers.
KEY POINT Relative perceived value offers a visual overview of the relative performance of all competitors in a market in the eyes of customers.

Bradley Gale advocates one further step in the analysis process2. Instead of expressing the cost and benefit indices as a percentage of maximum, Gale suggests presenting them as a ratio of the market average. Taking the price dimension in Figure 13.8, the market average is 81.3. If 81.3 is given a value of 1 and the scores for the three competitors expressed as a ratio of it, Competitor 1's score would be 1.09 (9% better than the market average), XYZ's would be 0.95 and Competitor 2's 0.96. Similarly, the market average for customer satisfaction with the benefits would be 82.9, giving Competitor 1 a relative score of 1.02 (2% better than the market average), XYZ a score of 1.05 and Competitor 2 a score of 0.93 (7% worse than the market average). The outcome is shown in Figure 13.9. Gale also adds a diagonal line to indicate 'fair value', which is a reasonable trade-off between benefits (or quality) and cost. A company with high prices, like XYZ, could offer fair value in the eyes of customers provided it delivers very high quality or a strong combined benefits package. Equally, a company offering lower quality and fewer benefits, like Competitor 1, can provide fair value or better in the eyes of customers if it has very attractive prices. The full service versus low cost airlines would be good examples. According to Gale, companies offering better than fair value, like Competitor 1, are in the 'superior value' zone and can expect to gain market share, whilst those in the 'inferior value' zone, like Competitor 2, will lose market share.

KEY POINT Companies offering 'superior value' will gain market share.

FIGURE 13.9 Relative perceived value
[Chart: market perceived quality ratio (x axis, 0.85 to 1.15) plotted against market perceived price ratio (y axis, 0.85 to 1.15). A ‘fair value’ diagonal separates the ‘superior value’ zone (gain market share), containing Competitor 1, from the ‘inferior value’ zone (lose market share), containing Competitor 2; XYZ lies close to the fair value line.]
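Gale’s ratio calculation can be sketched in a few lines of code. The underlying index scores below are hypothetical assumptions, chosen only so that they are consistent with the market averages and ratios quoted in the text:

```python
# Express each supplier's index score as a ratio of the market average,
# as Gale suggests. The raw index scores are illustrative assumptions
# consistent with the market averages quoted in the text (81.3 and 82.9).

def value_ratios(scores):
    """Return each supplier's score as a ratio of the (unweighted) market average."""
    market_average = sum(scores.values()) / len(scores)
    return {supplier: round(score / market_average, 2)
            for supplier, score in scores.items()}

price_indices = {"Competitor 1": 88.6, "XYZ": 77.2, "Competitor 2": 78.0}
benefit_indices = {"Competitor 1": 84.6, "XYZ": 87.0, "Competitor 2": 77.1}

print(value_ratios(price_indices))    # {'Competitor 1': 1.09, 'XYZ': 0.95, 'Competitor 2': 0.96}
print(value_ratios(benefit_indices))  # {'Competitor 1': 1.02, 'XYZ': 1.05, 'Competitor 2': 0.93}
```

A simple mean of the three suppliers is assumed here; in practice the market average could be weighted by market share.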
There are two ways of generating data for the cost axis. The first is based on customers’ perception of relative costs, collected by surveys. An alternative method is to use the real prices charged by the competing suppliers. This method is shown in Figure 13.10, with the benefits / quality score on the x axis still generated by a customer survey. If real prices are used, the y axis scale is effectively reversed, a low score (low prices) now being good for customers and a high score offering lower value, so ‘superior value’ is now shown towards the bottom right hand corner of the chart. Figure 13.10 also shows a more typical situation, with a fairly large number of competing suppliers offering a range of cost–quality mixes, most of them lying in the fair value zone. Suppliers 5 and 8 offer superior value so can expect to gain market share, whilst the market shares of Suppliers 1 and 2 will be under pressure since they offer inferior value.

FIGURE 13.10 Customer value map
[Chart: market perceived quality ratio (x axis, 0.85 to 1.15) plotted against price ratio (y axis, 0.85 to 1.15, based on real prices, so a low score is good for customers). Suppliers 1 to 8 are plotted, most within the ‘fair value zone’ along the diagonal; ‘superior value’ lies towards the bottom right of the chart and ‘inferior value’ towards the top left.]
However, a chart like Figure 13.10 could be misleading for a company whose target market is just one segment of the market. Offering ‘inferior value’ in the eyes of non-target customers would not be a strategic problem in this scenario. However, a sample of the supplier’s own customers would still be biased. For an accurate picture of how well it is competing in its chosen market, the sample should be random and representative of anyone that fits the relevant customer profile in the target market.
13.5 Market standing or relative perceived value?
It is clear from the previous two sections that the methodology chosen to perform a competitor analysis can affect the outcome. Based on market standing (Figure 13.3), XYZ is the leading supplier in the market and could expect to gain market share over both of its main competitors. By contrast, Competitor 1 leads the market on relative perceived value, whether the figures are expressed as indices (Figure 13.8) or ratios (Figure 13.9). The difference between the two approaches is the impact of price on the outcome. In the market standing example price is only one of eight components of each
competitor’s index, and in a real world survey may be only one of 15 or 20 requirements scored. Even though the index is weighted for importance, this probably under-estimates the role of price in customers’ supplier selection decisions in very price sensitive markets. Conversely, in the relative perceived value approach, price or cost inevitably makes as much impact on the outcome as all the other customer requirements combined, which will exaggerate the importance of price in many quality- or benefits-driven markets. Companies should therefore base their choice of methodology on the price sensitivity of their target market, as explained in the next two sub-sections.

13.5.1 Low price sensitivity
In markets where customers’ choices and loyalty are driven mainly by quality, service, innovation, image or other non-price benefits, market standing offers by far the most useful approach. Provided price emerges from the exploratory research as one of the 15 or 20 most important customer requirements (which it almost always does), it is included on the questionnaire and as one of the components of the index. Markets such as executive cars, state-of-the-art technological products, designer clothing, private banking, first and business class air travel, luxury hotels, Michelin star restaurants, cruises and a host of personal services and leisure experiences for the affluent will always be benefits- rather than cost-driven. Whilst price has to be broadly in line with the market, it will play a relatively small role in customer satisfaction and loyalty. The benefits, on the other hand, will be crucial to customers’ supplier selection decisions, so in this type of market it is vital that competitor comparison surveys fully explore the benefits, since companies’ ability to continually offer enhancements to quality and service will be key to their continuing success.
It is also essential that the overall market standing outcome accurately reflects the relatively low importance of price compared with the collectively critical influence of the range of benefits.

KEY POINT For markets that are not too price sensitive, market standing will provide a better picture of competitive positioning than relative perceived value.

13.5.2 High price sensitivity
Some products and services offer minimal differentiation so compete primarily on price. Often described as commodity markets, typical examples include utilities such as gas and electricity, no-frills airlines and many B2B markets such as raw materials or basic services like cleaning and security. In these markets price could be as important in the supplier selection decision as all the other benefits combined, making relative perceived value the ideal methodology for depicting competitors’ relative performance. Indeed, since in some completely undifferentiated markets price could conceivably account for more than 50% of customer behaviour, even the customer value map might under-emphasise its impact. In this type of market it is advisable to conduct exploratory research (see Chapter 5) with a larger than normal sample (e.g. 50 depth interviews or 10 focus groups) and to use a ‘points share’ to establish the
relative importance of customers’ requirements. In a points share, sometimes called the constant sum method3, customers are given a fixed number of points (typically 100) to allocate across their supplier selection criteria, reflecting their relative importance. There is no maximum or minimum number of points that must be allocated to each factor. If price is overwhelmingly important, a customer is free to allocate all their points to price and none to any of the other factors. Rather than simply giving customers a list of factors to score, the exercise is more likely to reflect their real-life supplier selection decisions if a purchasing scenario is presented. For example, in the market for low cost flights, the following introduction might be provided.

“Imagine you are planning a long weekend break with your partner to a European destination. Three airlines offer flights between the UK and your destination. Please think about the criteria you would use to choose between the three airlines. You have 100 points to share across the factors listed. Please allocate the points according to the relative importance to you of each factor when you are choosing between the available flights. You can allocate any number of points, from 0 to 100, to each factor as long as the total number of points you allocate does not exceed 100.”

Customer requirement                                  Points allocated
Facilities at airport e.g. shopping, catering         ____
Reputation of airline                                 ____
Travel time from home to UK airport                   ____
Travel time from overseas airport to destination      ____
Price of ticket                                       ____
Availability of seat reservations                     ____
Option to purchase in-flight food                     ____
Provision of free in-flight food                      ____
Flight time                                           ____
Availability of internet booking                      ____
Availability of telephone booking                     ____
Availability of booking through a travel agent        ____
Air miles awarded                                     ____
Safety record of airline                              ____
Type of plane used                                    ____
Free luggage allowance up to 25kg                     ____
Total points allocated (maximum 100 points)           ____
Although in theory customers could allocate an equal number of points to each requirement, this never happens in practice since some requirements are invariably more important than others. The points share therefore forces customers to make choices, since they cannot increase the number of points allocated to one requirement without reducing those allocated to another. It is particularly useful in price sensitive markets since it will fully reflect the extent to which price is more important than the other requirements. However, it is easy for customers to complete only when there are few criteria; faced with too many criteria to score, participants tend to focus more on the arithmetic than on the relative importance of the requirements.
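The bookkeeping behind a points share exercise can be sketched as follows. The factor names are taken from the airline example above, but the allocation itself is invented, and the 50% decision rule applied in the comment is the one described later for choosing between market standing and relative perceived value:

```python
# Validate a constant-sum ("points share") response and convert it to
# relative importance weights. The allocation below is invented.

def points_share_weights(allocation, total_points=100):
    if any(points < 0 for points in allocation.values()):
        raise ValueError("points cannot be negative")
    allocated = sum(allocation.values())
    if allocated > total_points:
        raise ValueError(f"{allocated} points allocated; the maximum is {total_points}")
    # Relative importance = each factor's share of the points actually allocated
    return {factor: points / allocated for factor, points in allocation.items()}

response = {
    "Price of ticket": 40,
    "Flight time": 20,
    "Availability of internet booking": 25,
    "Free luggage allowance up to 25kg": 15,
}
weights = points_share_weights(response)
print(weights["Price of ticket"])  # 0.4 - the most important factor, but well
                                   # under 50%, so market standing rather than
                                   # relative perceived value would be indicated
```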
In fact, seven is regarded as the maximum number of factors even for the much simpler task of ranking items in order of importance4. Since a points share is more difficult, such a low limit on the number of requirements that can be scored would make it totally inapplicable for customer satisfaction research. However, if appropriate steps are taken, this does not have to be a barrier. Firstly, a points share for CSM should not be conducted as a short quantitative interview, and certainly not as a self-completion questionnaire, but more like the exploratory research techniques described in Chapter 5. Either a depth interview or a focus group setting is suitable. It is essential to allow sufficient time to explain the exercise to participants, to enable them to ask questions and to let them complete the points share at their own pace. Providing the correct tools for the job is also essential, so a calculator is vital. Even better would be a laptop with the requirements pre-entered in a spreadsheet and the ‘total points allocated’ cell programmed to add the values above, so that customers can experiment with their points allocations and immediately see whether they are exceeding the maximum. This overcomes the major problem of customers focusing on the maths more than on the relative importance of the requirements. It also enables the number of points to be increased, say to 1,000, which can be useful where there is a large number of requirements, quite a few of which may be important. However, this dictates that the list of requirements to be scored by the points share must already be known, so a preliminary exploratory phase, using conventional CSM depth interview or focus group techniques (see Chapter 5), would have to be conducted to establish customers’ most important requirements. The points share exercise would then be conducted only with the 15 to 20 requirements to be used on the main survey questionnaire.
If the points share data show that price does dominate customers’ supplier selection decisions, using relative perceived value for the main survey analysis would be appropriate. However, if price is awarded considerably less than 50% of the points allocated by customers, even if it is the most important single requirement, relative perceived value would over-estimate its importance, so market standing would be the most suitable main survey analysis method.

KEY POINT A points share can be used to determine the most appropriate main survey analysis technique. Only if price is as important, or almost as important, as all the other customer requirements combined is relative perceived value suitable.

The points share has been criticised for being an ipsative scale5. This means that it has no anchors or reference points, so whilst it establishes the relative importance of the factors scored, it provides no indication of their absolute level of importance. This makes it essential to conduct conventional exploratory research before the points share to ensure that the requirements scored are those of most importance to customers.
13.6 Switching
The main characteristic of very competitive markets is the prevalence of switching. Customers see changing from one supplier to another as relatively easy, so often feel it is worth switching for even a small increase in ‘value’. They may even switch just to find out whether an alternative supplier offers better value, since it is easy to switch back if it doesn’t. In very competitive markets this promiscuity reaches its height when customers switch simply for a different customer experience, e.g. visiting a new restaurant ‘for a change’. Hofmeyr6 calls this ‘ambivalence’ and points out that in some markets customers are loyal to more than one supplier. They will sometimes visit a different restaurant even though they are completely satisfied with their favourite one. In some markets, therefore, companies need a much deeper understanding of customers’ loyalty attitudes and behaviour.

KEY POINT In highly competitive markets companies need a detailed understanding of the customers most likely to switch suppliers.

A competitor analysis must identify the customers most and least likely to switch7. This should cover the company’s own customers and competitors’ customers, since the company must understand how to defend its own vulnerable customers as well as how to target and attract its competitors’ most vulnerable ones. Hill and Alexander1 suggest dividing one’s own and the competitors’ customers into loyalty segments as shown in Figure 13.11. A loyalty index (see Chapter 11) would typically be used for this purpose. The components of the index need very careful consideration in promiscuous markets. Of the loyalty questions described in Chapter 9, the commitment, trust and preference questions will be particularly important. Indeed, companies will often benefit from asking several preference questions covering share of wallet and accessibility as well as attraction of competing suppliers.

FIGURE 13.11 Loyalty segments

              Our customers                                  Competitor’s customers
Faithful      Strongly loyal, rate our performance highly,   Strongly loyal, rate competitor highly,
              little interest in competitors                 little interest in us
Vulnerable    Apparently loyal customers but high level of   Repeat buyers with competitors but little
              inertia or some interest in competitors        positive loyalty and some interest in us
Flirtatious   Little positive loyalty, actively interested   Little loyalty to competitors, may be
              in alternatives                                receptive to our advances
Available     Customers showing a strong preference for      Competitors’ customers who already rate us
              alternative suppliers                          superior to their existing supplier
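For analysis, each respondent ultimately has to be assigned to one of the four loyalty segments. A hypothetical rule of thumb is sketched below, using a loyalty index (0–100) and a score for interest in alternative suppliers (0–100); the cut-off values are illustrative assumptions, not prescribed in the text:

```python
# Assign a survey respondent to one of the four loyalty segments of
# Figure 13.11. Both inputs are scores out of 100; the thresholds are
# illustrative assumptions only.

def loyalty_segment(loyalty_index, competitor_interest):
    if loyalty_index >= 80 and competitor_interest <= 20:
        return "Faithful"      # strongly loyal, little interest in competitors
    if loyalty_index >= 60:
        return "Vulnerable"    # apparently loyal, but inertia or some interest
    if loyalty_index >= 40:
        return "Flirtatious"   # little positive loyalty, actively interested
    return "Available"         # strong preference for alternative suppliers

print(loyalty_segment(90, 10))  # Faithful
print(loyalty_segment(85, 45))  # Vulnerable - loyal scores, but looking around
print(loyalty_segment(30, 80))  # Available
```

The same function, run against a competitor’s customers with the roles reversed, yields the right-hand column of the figure.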
Based on the information in Figure 13.11, companies can develop detailed loyalty strategies to protect their own customers and to attract their competitors’ most vulnerable customers.

FIGURE 13.12 Loyalty strategies

              Our customers                                  Competitor’s customers
Faithful      Reward loyalty, stimulate referrals, strong    Don’t target
              focus on service recovery factors
Vulnerable    Strong focus on PFIs, communications           May be worth targeting if competitors are
              campaigns and loyalty schemes to build         failing to meet their needs in areas where
              positive loyalty                               you perform strongly
Flirtatious   Objective assessment of costs and benefits     Go for the jugular, especially where you
              of retaining this group. Strong focus on       believe your strengths match their
              closing any perception gaps                    priorities
Available     Cut losses. Chances of retention very low      Should be easy prey but make sure they’re
                                                             not habitual switchers
Of course, few companies have the resources to successfully implement acquisition and retention strategies across all segments. Figure 13.12 illustrates the situation for a supplier with one competitor, but in a very promiscuous market there will be several competitors, each with their own strengths, weaknesses and customer profiles. The starting point for strategic decisions on retention and acquisition strategies is therefore to understand the distribution of the customer base across the four loyalty segments. Figure 13.13, for example, depicts a company with a very secure customer base, which should take steps to reward and protect the loyalty of its many faithful customers, whilst implementing strong measures to attract any of the competitors’ available and flirtatious customers, provided they have a suitable needs profile.

FIGURE 13.13 Secure customer base
[Bar chart: percentage of the customer base in each loyalty segment — Faithful, Vulnerable, Flirtatious, Available — on a scale of 0% to 70%, with Faithful by far the largest segment.]
By contrast, the supplier shown in Figure 13.14 has a customer base that is typical of a company devoting too much resource to winning new customers at the expense of satisfying and retaining its existing ones. This company needs to seriously re-think its strategic priorities. A relevant example is the MBNA reference from Chapter 2, where
the company was not keeping its customers long enough for them to become sufficiently profitable. MBNA’s ‘zero defections’ strategy, based on delivering exceptionally high levels of service to targeted customers, moved the company from 38th place to become the largest bank card provider in the USA over two decades8,9.

FIGURE 13.14 Disloyal customer base
[Bar chart: percentage of the customer base in each loyalty segment — Faithful, Vulnerable, Flirtatious, Available — on a scale of 0% to 50%, with only a small Faithful segment.]
KEY POINT To maximise market share, companies must efficiently focus resources on the most winnable potential customers.

To optimise strategic decisions of the type outlined in Figure 13.12, a company must develop two additional areas of insight. Firstly, it must segment customers and build detailed profiles of the predominant types of customer in its own and its key competitors’ loyalty segments. Secondly, it must understand what is making customers faithful, vulnerable, flirtatious or available, and what the company can do to maximise its appeal to targeted customer segments.

13.6.1 Segmentation
To effectively target retention or acquisition strategies, companies must understand how customers differ across the loyalty segments. This depends on recording sufficient classification data covering all the likely segmentation variables, including demographic, geographic, behavioural and lifestyle / psychographic details for the customers surveyed. Demographic information includes age, gender, family life cycle, income, occupation, education and ethnic origin. In some markets, such as pensions or health care, customers’ attitudes and behaviours are heavily influenced by demographic factors. In others, such as groceries and cars, a more complex level of attitudinal and psychographic profiling is often necessary to fully understand the differences between loyalty segments. These may include core values, such as the importance placed on individual liberty, health and fitness and family values, or deeply held beliefs, such as commitment to the environment, fair trade food or specific political or charitable causes. Sometimes the best way to profile customers is to start with their tangible behaviour, such as when they buy, how they buy (channel), how often they buy and how much they buy, then search for demographic, psychographic or geographic differences within the behavioural segments. This can
be appropriate for many leisure markets. Yet another profiling variable that often uncovers significant differences between customers is needs segmentation, based on the relative importance of customers’ requirements; price- versus quality-driven segments being an obvious example. One of the earliest academic authorities on customer segmentation was Yoram Wind10, who suggested some less commonly used segmentation variables which, in his view, often provided more insight than standard classification data such as demographics. Wind’s preferred segmentation criteria included:

- Needs segmentation (called benefits segmentation by Wind)
- Product preference
- Product use patterns
- Switching behaviour
- Risk aversion (attracted by innovation and change versus preference for familiar things)
- Deal-proneness
- Media use (in other words, the media they use will indicate the type of person they are)
- Store loyalty / shopping behaviour.

The last two are particularly interesting since they illustrate the idea that a company can often draw insightful conclusions about its own customers’ loyalty by asking them questions about their behaviour in other walks of life. Media usage is an obvious example. Some people are promiscuous users of media, hopping across many TV, radio and internet channels, whilst others may get their information and entertainment from one newspaper, one or two radio stations and a small range of TV channels. Rather than asking its customers direct questions about their behaviour in its own market (e.g. likelihood of renewing their policy), an insurance company might ask about their media usage and shopping behaviour. Customers who use a very small range of media and are highly loyal to one supermarket for their grocery shopping are displaying a more favourable loyalty personality than those who often shop at three or four different supermarkets and have very diverse media habits.
Whatever they say about their intentions to renew their policy, customers demonstrating strong loyalty behaviours in other markets are more likely to be loyal insurance customers.

KEY POINT The ability to accurately target customers will considerably improve the effectiveness of customer acquisition and customer retention strategies.

13.6.2 Profiling customers
Given sufficient classification data, there are several analytical techniques that can be used to profile customer segments. They can be split into two fundamental types, ‘a priori’ and ‘post-hoc’11, sometimes called ‘verification’ and ‘discovery’. ‘A priori’
techniques involve the researcher comparing the data across pre-defined segments. ‘Post-hoc’ techniques start with the survey data and discover where the biggest differences in the data can be found. They then define the segments ‘after the fact’, based on the groups of customers whose scores differed the most. In this section we will explain three analytical techniques that are very suitable for customer profiling.

(a) Cross tabulations
The obvious starting point is to split each loyalty segment into all the ‘a priori’ sub-groups available from the classification data. Using confidence intervals (see Chapter 11 and Figure 11.5), statistically significant differences between the segment splits can be identified. An example based on age is shown in Figure 13.15.

FIGURE 13.15 Segment splits by age

                                       Under 35s   35-55s   Over 55s
Ease of contacting the call centre       7.70       7.63      8.46
Helpfulness of call centre staff         7.66       7.92      8.03
Keeping promises and commitments         6.51       6.20      6.88
Treating you as an individual            7.32       7.03      8.31
Handling of complaints                   2.84       3.16      5.92
Expertise of call centre staff           4.59       5.47      5.98
Speed of service                         7.32       7.47      8.62
Helpfulness of branch staff              7.94       8.02      8.98
Expertise of branch staff                6.82       7.40      7.45
Information provided by XYZ              7.60       7.75      7.95
Overall value for money                  7.17       7.59      7.46
Convenience of opening hours             7.39       6.95      8.42
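The ‘statistically significant’ flags in Figure 13.15 come from comparing sub-group means against their confidence intervals. A rough sketch of such a test is shown below; the standard deviations and sample sizes are invented, and the book’s own procedure in Chapter 11 may differ in detail:

```python
# Two-sample z-test on sub-group mean scores, used to decide whether a
# cross-tab cell difference is worth highlighting. Standard deviations
# and sample sizes are invented for illustration.
import math

def significantly_different(mean_a, sd_a, n_a, mean_b, sd_b, n_b, z_crit=1.96):
    """True if the gap between the two means exceeds 1.96 standard errors."""
    standard_error = math.sqrt(sd_a ** 2 / n_a + sd_b ** 2 / n_b)
    return abs(mean_a - mean_b) / standard_error > z_crit

# 'Handling of complaints': under 35s (2.84) versus over 55s (5.92)
print(significantly_different(2.84, 2.1, 150, 5.92, 2.3, 140))  # True - highlight
# 'Overall value for money': under 35s (7.17) versus 35-55s (7.59)
print(significantly_different(7.17, 2.1, 150, 7.59, 2.3, 140))  # False
```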
Simply looking at the information suggests that over 55s are more satisfied than younger customers. The cells with differences that are statistically significant are highlighted. Producing cross tabs for all segments of interest will identify differences across sub-groups, but it is very time consuming and not always conclusive. A technique that produces a much more definitive result would therefore be far more useful for decision making.

(b) Decision tree analysis
A very clear, unambiguous technique that identifies the biggest differences between segments is decision tree analysis, sometimes called discriminant analysis. There are several computer programmes based on the AID (automatic interaction detection) algorithm that sequentially divide a sample into a series of sub-groups, with each split chosen because it accounts for the largest part of the remaining unexplained variation. The easiest way to understand the process is to work through the decision tree shown in Figure 13.16.
FIGURE 13.16 Decision tree analysis

1. Whole sample (100%), CSI 81.3
├── 2. Over 55 (46%), 92.4
│    ├── 4. Still working (12%), 88.6
│    │    ├── 10. ABC1 (5%), 84.9
│    │    └── 11. C2DE (7%), 90.4
│    └── 5. Retired (34%), 95.1
│         ├── 8. Income up to £20,000 p.a. (25%), 96.4
│         │    ├── 12. Outside London/South East (19%), 97.2
│         │    └── 13. London/South East (6%), 94.0
│         └── 9. Income over £20,000 p.a. (9%), 92.7
└── 3. Under 55 (54%), 74.8
     ├── 6. With children (10%), 79.5
     └── 7. Without children (44%), 69.3
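The split-selection step that grows a tree like Figure 13.16 can be sketched in plain Python: at each stage, choose the dichotomous variable whose two groups account for the largest part of the remaining variation in the satisfaction score. The respondent data below is a tiny invented sample, not the survey behind the figure:

```python
# Toy AID-style splitter: find the single dichotomous variable that
# explains the most variance in the satisfaction score. The sample
# data and field names are invented for illustration.

def variance(values):
    mean = sum(values) / len(values)
    return sum((v - mean) ** 2 for v in values) / len(values)

def best_split(respondents, score="csi"):
    total_ss = variance([r[score] for r in respondents]) * len(respondents)
    best = None
    for var in respondents[0]:
        if var == score:
            continue
        groups = {}
        for r in respondents:
            groups.setdefault(r[var], []).append(r[score])
        if len(groups) != 2:   # only dichotomous splits at each stage
            continue
        within_ss = sum(variance(g) * len(g) for g in groups.values())
        explained = total_ss - within_ss
        if best is None or explained > best[1]:
            best = (var, explained)
    return best[0]

sample = [
    {"over_55": True,  "children_at_home": False, "csi": 92},
    {"over_55": True,  "children_at_home": True,  "csi": 93},
    {"over_55": False, "children_at_home": True,  "csi": 74},
    {"over_55": False, "children_at_home": False, "csi": 76},
]
print(best_split(sample))  # over_55 - age explains the most variation
```

Applying the same function recursively to each resulting sub-group would grow the rest of the tree.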
The process starts with the entire sample, indicated by the 100% above the first box, which is numbered 1 in its top right hand corner. The 81.3 refers to the customer satisfaction index for the sample in question. This could be the entire sample or, more usefully, a sub-set of it, such as the ‘flirtatious’ segment or a competitor’s ‘available’ segment. To keep matters simple we will assume it is the entire sample. The data examined does not have to be overall satisfaction. It could be a loyalty index, a single question such as recommendation, or an individual PFI such as ‘quality of advice’. The process then looks for the single dichotomous variable that accounts for the biggest difference in satisfaction across the sample and finds that it is age. It can split any variable into only two groups at each stage, and in this example the two age segments that account for the biggest variation in overall satisfaction are the over and under 55s, which now become boxes 2 and 3. The over 55s are 46% of the sample and have a customer satisfaction index of 92.4, whilst the under 55s, who account for 54% of the sample, are much less satisfied at 74.8. The computer then looks for the factor that explains the most variation in the satisfaction of the over 55s and the under 55s: for the over 55s it is whether they are working or retired, and for the under 55s it is whether or not they have children living at home. If the biggest difference within either group had been a further sub-division of age (e.g. dividing the under 55s into under and over 25s), decision tree analysis would have produced that outcome. The percentages shown above each box refer to that cell’s percentage of the total sample, the figures for ‘still working’ and ‘retired’ totalling the 46%, which is
the proportion of the total sample accounted for by the over 55s. This makes it easy to profile the most satisfied or loyal customers. The company concerned would be well advised to target retired over 55s on modest incomes outside London. As well as having very high levels of satisfaction with the benefits delivered by the company, they also account for a sizeable 19% of customers in the target market. Of course, this last statement holds true only if the survey sample is representative of the market rather than of the company’s own customers.

KEY POINT Decision tree analysis helps a company to target its customer acquisition strategies on the types of customers that are most likely to be highly satisfied and loyal.

(c) Latent class regression
Latent class regression is a ‘post-hoc’ technique that facilitates the construction of different models along lines that may not have been suggested by existing customer segmentation data, but is based on the way respondents form opinions. Most analytical techniques produce an ‘average’ picture across respondents. In some cases such a view can be misleading if the average obscures fundamental differences in the way customers form opinions. By identifying ‘causally homogeneous’ sub-groups, latent class regression eliminates this problem. The example in Figure 13.17 shows the success of the technique in uncovering ‘price-driven’ and ‘quality-driven’ customers. As shown by the R², this improves the predictive accuracy of the model. The overall model explained only 32% of the variance in customers’ perceptions of value, but once latent class regression had identified clusters of price- and quality-driven customers, 69% and 76% respectively of each segment’s value judgement was explained.

FIGURE 13.17 Latent needs segmentation
Overall sample:                   Price → Value 0.30;  Quality → Value 0.40;  R² = 0.32
Price-/quality-driven segments:   Price → Value 0.76/0.21;  Quality → Value 0.24/0.82;  R² = 0.69/0.76
Although latent class regression is a very sophisticated technique that will often uncover clusters of customers that would never be identified by ‘a priori’ segmentation techniques, its big disadvantage is that it does not identify who those customers are in the population. It is left to the researcher to study the data and to make judgements about the types of customers that make up the price- and quality-driven segments. This element of uncertainty tends to reduce its utility for decision making compared with decision tree analysis.

13.6.3 Drawing conclusions
As we have stated many times in this book, the purpose of surveys is to take action to improve the business. Loyalty segmentation will improve the effectiveness of action by focusing it on the loyalty segments where the company can make the biggest difference. To draw these conclusions accurately, companies should apply the analytical techniques illustrated in Figures 13.2 to 13.10 not to the entire sample but to each of the loyalty segments in turn. Obviously, only the scores given by the customers in the relevant segment would be used for the analysis. This does create a need for large samples, since at least 200 customers are needed in each segment for adequate reliability and there could be many segments. As well as four loyalty segments for the company there will be four for each competitor, and in some markets there could be four, five or even more competitors. Most companies should start by defending their own ‘at risk’ customers, especially those in the vulnerable segment. For this, the starting point would be Figures 13.4 and 13.5, which would show where the company is least meeting its own vulnerable customers’ requirements. Provided the questionnaire asked about the attraction and accessibility of alternative suppliers, it will know which competitor its vulnerable customers would be most likely to switch to.
The information displayed in Figures 13.6 and 13.7 would pinpoint how best to counter this competitive threat. Of course, depending on the percentage of customers in each loyalty segment and the prevalence of switching in the market, it may be more sensible to focus retention strategies on the flirtatious segment, or even the available one, although it is often not cost-effective to achieve sufficient attitude change in the available segment. Customer acquisition programmes are usually best targeted on competitors’ available customers followed by their flirtatious segments, but which ones? If there are five competitors there are ten segments of flirtatious and available customers. In this situation half the task is to identify the customers who are most dissatisfied with their current supplier. The other half is to pinpoint which of those are most likely to be attracted by the benefits offered by your own company. Using Figure 13.18, it is possible to match the importance scores of a competitor’s available or flirtatious customers with the performance scores they gave for your own organisation. As shown in the chart, if the café, fresh meat and prices were very
FIGURE 13.18 Win and keep the right customers

[Chart: importance and performance scores (scale 5 to 10) for Fruit & vegetables, Stock availability, Bakery, Cleanliness, Queue times, Price, Fresh meat and Café.]
important to these customers, the company would not be well placed to win and keep them. However, the chart shows a very good match between the needs of these customers and the company’s strengths, identifying winnable customers whose requirements the company can meet or exceed. There is no point attracting a competitor’s disgruntled customers who are going to be just as dissatisfied with the benefits provided by your own company.

KEY POINT There is no point winning customers you are unlikely to keep.

Clearly, this work is detailed and time consuming, but it will pay handsomely if it prevents a company from losing good customers it could have kept, or saves it from incurring the high cost of winning customers whose long term loyalty it is unlikely to attain.
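The matching exercise of Figure 13.18 can be sketched as follows. All scores are invented: a competitor’s ‘available’ customers have rated how important each requirement is to them and how they perceive our own performance, and a group is treated as winnable when our performance meets or exceeds importance on their top needs — a simplified stand-in for the visual gap comparison in the chart:

```python
# Match a competitor segment's importance scores against the performance
# scores those same customers gave our company (Figure 13.18 style).
# All scores out of 10 and invented for illustration.

importance = {"Queue times": 9.2, "Price": 8.8, "Fresh meat": 8.5,
              "Cleanliness": 8.0, "Café": 6.1}
performance = {"Queue times": 9.4, "Price": 9.0, "Fresh meat": 8.6,
               "Cleanliness": 8.3, "Café": 5.8}

def winnable(importance, performance, top_n=3):
    """True if performance meets or exceeds importance on the segment's top-n needs."""
    top_needs = sorted(importance, key=importance.get, reverse=True)[:top_n]
    return all(performance[need] >= importance[need] for need in top_needs)

print(winnable(importance, performance))  # True - our strengths match their
                                          # priorities (the weak Café score does
                                          # not matter much to this segment)
```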
Conclusions
1. A simple comparison question provides a good overview of how an organisation is seen by its customers relative to other similar organisations, but will be less useful in highly competitive markets.
2. Competitor comparison surveys must be based on a random and representative sample of all the customers in the market for the product or service.
3. Telephone interviews, conducted by an independent agency, will normally be used for competitor comparison surveys, but response rates will be lower than for customer satisfaction surveys.
4. In competitor surveys, respondents score perceived performance rather than satisfaction.
5. By comparing the satisfaction gaps of its own customers with the areas where it most under-performs key competitors, a company can make decisions about how to improve its market position.
6. Relative perceived value is based on the assumption that customers choose the supplier that provides the best value, in other words the benefits delivered relative to the cost of obtaining the product or service.
7. Since it covers only two variables, quality and cost, relative perceived value can provide a visual overview of the relative performance of all competitors in a market, as perceived by customers.
8. According to the relative perceived value concept, companies offering 'superior value' will gain market share.
9. For markets that are not too price sensitive, market standing will provide a better picture of competitive positioning than relative perceived value.
10. A points share can be used to determine the most appropriate main survey analysis technique. Only if price is as important, or almost as important, as all the other customer requirements combined is relative perceived value suitable.
11. In highly competitive markets companies need a detailed understanding of the customers most likely to switch suppliers.
12. To maximise market share, companies must efficiently focus resources on the most winnable potential customers.
13. The ability to accurately target customers considerably improves the effectiveness of customer acquisition and customer retention strategies.
14. The clearest technique for identifying the biggest differences between segments in customer satisfaction and loyalty research is decision tree analysis.
15. Decision tree analysis helps a company to target its customer acquisition strategies on the type of customers that are most likely to be highly satisfied and loyal.
16. There is no point winning customers you are unlikely to keep.
CHAPTER FOURTEEN
Advanced analysis: Understanding the causes and consequences of customer satisfaction The methodology covered so far in this book will enable organisations to achieve the two primary objectives of a CSM process. Firstly, and most importantly, it must provide a truly accurate reflection of how satisfied or dissatisfied customers feel about their customer experience. Secondly, it must deliver clear conclusions and actionable outcomes that enable the organisation to make improvements. For most companies, this straightforward approach will provide all they need from a CSM system. However, for a relatively small percentage of organisations that have already attained unusually high levels of customer satisfaction, more complex analytical techniques will be necessary, and these will be covered in the next two chapters.
At a glance In this chapter we will:
a) Examine how asymmetry affects customer satisfaction data.
b) Introduce the concept of attractive quality.
c) Explore how customers' requirements can be classified into categories such as satisfaction maintainers and enhancers.
d) Explain how to identify maintainers and enhancers.
e) Consider how to highlight the best improvement opportunities where relationships are broadly linear.
f) Discuss the concept of delighting the customer.
g) Review the consequences of customer satisfaction, especially loyalty.
h) Explain how organisations can fully understand the relationship between customer satisfaction and loyalty.
14.1 Asymmetry An important characteristic of customer satisfaction data is that its relationships are often not linear. This asymmetric nature may affect conclusions about the antecedents and consequences of customer satisfaction1 – in other words, the way organisational performance affects customer satisfaction and the way customer satisfaction affects outcomes such as loyalty. We will use the relationship between satisfaction and loyalty to illustrate the point.
FIGURE 14.1 A linear relationship (x axis: satisfaction, 1-10; y axis: loyalty, 20%-100%)
Figure 14.1 shows a linear relationship between satisfaction and loyalty. Every 1% increase in customer satisfaction would result in a 1% gain in loyalty. This would be very convenient for planning purposes, but the real world is rarely so symmetrical. The relationship between the two variables is very unlikely to be a straight line; it is much more likely to be curved, like the example shown in Figure 14.2.
FIGURE 14.2 Non-linear relationship (x axis: satisfaction, 1-10; y axis: loyalty, 20%-100%)
In Figure 14.2, the relationship between satisfaction and loyalty depends on where the company is on the curve. In the example shown, strong loyalty is achieved only at the highest levels of satisfaction. More about this later. 14.1.1 Attractive quality The origin of theories about the asymmetric nature of customer satisfaction data was the work of Japanese quality expert, Dr. Noriaki Kano2,3,4, who focused on the antecedents of customer satisfaction: the relationship between customers' needs and the organisation's ability to satisfy them. As long ago as 1979, Kano introduced the idea of 'attractive quality' to Konica. In the 1970s Konica had realised that to remain competitive its new product development programme had to radically differentiate the company from what was available at the time from competitors. However, Konica's sales department was reporting that customers were asking for only minor modifications to the existing models. Kano advised Konica to look beyond customers' stated needs by developing a deeper understanding of the customer's world and uncovering their latent needs. Konica staff examined consumers' photos at commercial processing labs and found many failures such as under- and over-exposure or blurred images. Customers couldn't have been happy with many of their photos, but blamed their own inability to correctly operate the settings on the camera. Addressing these hidden customer needs created new features such as auto focus and automatic exposure setting. Kano's advice to 'understand the customer's world' has been widely adopted by CSM experts. It was the origin of Michigan University's lens of the customer concept5. It is also the basis for using the type of projective techniques for CSM exploratory research described in Chapter 5, in order to delve beyond customers' 'top of mind' requirements and uncover their less conscious needs.
14.1.2 The Kano Model Kano had essentially discovered the difference between importance and impact covered in Chapters 4 and 10. His perspective was QFD (Quality Function Deployment), which was concerned with assimilating the voice of the customer into the product design process. The Kano model6, based on his concept of attractive quality, was developed to help designers to visualise product characteristics through the eyes of the customers and to stimulate debate within the design team. In particular, Kano pointed out that there are different types of customer need as well as the fact that customers' requirements are not all equally important. Kano's analysis was therefore originally conceived as a tool to help the design team classify and prioritise customer needs. Shown in Figure 14.3, Kano identified three types of customer needs, which he described in terms of customers' reactions to product characteristics.
1. The 'must be' factors. These are the absolute basics without which it wouldn't be possible to sell the product in the first place. They are often referred to as 'the licence to operate'. For
example, a car that always starts and is watertight in wet weather. Failure to reach adequate quality / performance standards on 'must be' factors will result in very high levels of customer dissatisfaction and defection.
2. The 'more is better' factors. Kano also called these 'spoken' or 'performance' characteristics, where each small improvement in quality or performance will make customers marginally more satisfied and vice-versa, such as the fuel consumption of a car. In his model they have a linear relationship with customer satisfaction, and are the product attributes most suited to a kaizen, or continuous improvement, approach.
3. The 'surprise and delight' factors. These are the real 'wow' factors (attractive quality) that differentiate a product from its competitors, such as a car that automatically slows down in cruise control if the vehicle in front is too close. As latent needs, their absence doesn't result in dissatisfaction since they are not expected, but when provided they will often surprise and always delight the customer.
FIGURE 14.3 The Kano model (axes: low to high quality or performance vs low to high satisfaction; curves: 1 must be factors, 2 more is better factors, 3 surprise and delight factors)
14.2 Applying asymmetry to customer satisfaction research Kano’s work was aimed primarily at manufacturers and was very product-focused but more recent researchers have found that his fundamental principle of asymmetry remains valid for customer satisfaction data7,8,9, especially regarding the consequences of satisfaction. Investigating Xerox’s concern that some of its satisfied customers were defecting, Jones and Sasser10 found that ‘totally satisfied’ customers were six times more likely to repurchase than ‘merely satisfied’ customers.
14.2.1 Using asymmetry to categorise customer requirements Regarding the antecedents of satisfaction, several theories have evolved for today's largely service-based economies. Oliver11 remained very close to Kano's three original categories but introduced the idea that some attributes can make customers satisfied but not dissatisfied and vice-versa, using the chemical concept of valence to label his 'directional' theories. Hence, 'must be' requirements such as 'cleanliness of toilets' are taken for granted when good, but very conspicuous when poor. Oliver called them 'monovalent dissatisfiers' since they can generate dissatisfaction but not satisfaction. By contrast, he called 'surprise and delight' requirements 'monovalent satisfiers' since, he claimed, they can cause satisfaction or delight but not dissatisfaction. His 'bivalent satisfiers' are the 'more is better' factors because they can result in both satisfaction and dissatisfaction depending on their level of performance. Anderson and Mittal12 highlighted the importance of recognising the non-linearity of customer satisfaction data, particularly when looking at consequences such as loyalty and profitability. Keiningham and Vavra13 built on this for the antecedents as well as the consequences of customer satisfaction, dividing customers' requirements into 'satisfaction-maintaining attributes' and 'delight-creating attributes'. Satisfaction-maintaining attributes are expected by customers, display diminishing returns beyond a 'parity-performance' level and are incapable of delighting. Delight-creating attributes will surprise and delight customers, displaying accelerating returns to customer satisfaction beyond a minimal performance level. 14.2.2 Delighting the customer The concept of customer delight has been of great interest to some organisations in recent years. Kano's original ideas were developed in the 1980s by much academic research into customers' emotional responses to consumption14,15.
A common theme, consistent with Kano and highlighted by Plutchik16, is that delight is created by a combination of surprise and joy (extreme pleasure) in the customer experience. Based on theme park customers, Oliver, Rust and Varki17 concluded that delight was based on a ‘surprising consumption experience’, which could be the provision of a benefit that was totally unexpected or alternatively a surprisingly high level of performance on a benefit that was expected. In either case, it is the element of surprise that ‘arouses’ customers and makes an enduring impact on their attitudes and future behaviour. KEY POINT Surprise is a crucial element in the ability to delight a customer. 14.2.3 Enhancers and maintainers Many CSM practitioners and authors have used asymmetry to divide customers’ requirements into two broad categories such as ‘satisfaction maintainers’ and ‘satisfaction enhancers’ or ‘satisfaction maintaining’ and ‘delight creating’ attributes13. Maintainers, such as ‘cleanliness of the toilets’, ‘on-time delivery’ or ‘reliability of the car’
will behave more like the bivalent ‘more is better’ factors at the low and mid points of the performance range. Unacceptable performance will cause extreme dissatisfaction, but improvements in performance will increase customer satisfaction. The key characteristic of maintainers is that they will reach a point where additional improvement in performance will not deliver a corresponding increase in satisfaction. Once the toilets are consistently very clean the cost of continually polishing the ceramics and stainless steel until everything gleams will not produce a return on investment. As shown in Figure 14.4, customer satisfaction climbs strongly as poor performance moves to good, but then levels off. This type of curve would therefore be classified as a ‘satisfaction maintainer’, since if customer satisfaction is at the required level, performance should simply be maintained rather than making efforts to improve it. KEY POINT Satisfaction maintainers are requirements where organisations must perform adequately to satisfy customers, but where performance above the expected level rarely translates into satisfaction gains.
FIGURE 14.4 Satisfaction maintainer (example: cleanliness of toilets; x axis: performance; y axis: customer satisfaction)
Customer requirements where improvements in performance will continue to increase satisfaction for much longer are known as ‘satisfaction enhancers’. These are a combination of the delighters and the bivalent ‘more is better’ factors since they are capable of making customers highly satisfied either by surprising them with a benefit they didn’t expect or by delivering exceptional levels of service on expected customer requirements. KEY POINT Satisfaction enhancers are requirements where exceptional performance is much more likely to be translated into very high levels of customer satisfaction.
Scott Cook, founder of American software developer Intuit, realised that building a “customer evangelist culture”19 would be fundamental to the long term success of the company. Unlike most software companies, technical staff took turns in answering customer calls, read customers’ suggestion cards and even took part in the “Follow Me Home” initiative, for which Intuit staff accompanied a new purchaser of its Quicken personal finance software to their home, where they watched the customer unpack, install and start to use Quicken. This enabled them to understand the user-friendliness of the product from the customer’s perspective and, in particular, whether it was meeting the company’s objective that customers should be able to use it within 30 minutes of opening the box. However, the real ongoing satisfaction enhancer for Intuit in the long run was technical support, where the department’s goal was not just to efficiently answer customers’ queries, but to “create apostles”. In fact, their goal was to treat customers so well that their experience prompted them to tell five friends about Quicken. Unlike most competitors, Intuit recruited highly intelligent representatives for its call centre, paid them well and trained them to answer customers’ queries on wide ranging personal finance issues, not just the technical aspects of the software. Even though Quicken retailed for only around $20 at the time, buyers were entitled to this high quality customer support for free and for life. Whilst many companies would have regarded this policy as financial suicide, Cook realised that the customer lifetime value benefits from retention, related sales and referrals would far exceed the cost of the service20. This type of customer requirement is therefore known as a satisfaction enhancer. Unlike Kano’s ‘surprise and delight’ factors, poor performance on enhancers will result in dissatisfaction since good service, attitude and other soft skills are expected.
However, unlike maintainers, continual striving for excellence in these areas (being great, not just good) will continue to have a positive impact on customer satisfaction before eventually hitting diminishing returns. This pattern of events would result in the type of S-shaped curve shown in Figure 14.5.
FIGURE 14.5 Satisfaction enhancer (example: friendliness of serving staff; x axis: performance; y axis: customer satisfaction)
14.3 Identifying enhancers and maintainers As we know from Chapters 4 and 10, the strength of the relationship between two variables, such as ‘cleanliness of the toilets’ and customer satisfaction, can be identified through statistical techniques such as correlation or multiple regression. However, these methods will not identify the extent of any non-linearity in the relationship, so other approaches are needed to pursue concepts such as enhancers and maintainers. This section explains three methods for identifying asymmetry in customer satisfaction data and understanding its implications. KEY POINT Statistical techniques such as correlation and multiple regression demonstrate the strength of a relationship between two variables but not the linearity of the relationship. 14.3.1 Intuitive judgements At a simple level, enhancers and maintainers can be estimated through experienced judgement. Givens such as ‘cleanliness of the toilets’ and ‘on-time delivery’ tend to be maintainers, whilst ‘friendliness of staff’ and ‘rewarding loyalty’ are more likely to be enhancers. A decade before Kano wrote about ‘attractive quality’, Theodore Levitt21 had already introduced the idea of differentiating the product, stating that: “The new competition is not between what companies produce in their factories but between what they add to their factory output in the form of packaging, services, advertising, customer advice, financing, delivery arrangements, warehousing and other things that people value.” Levitt’s ‘total product’ concept22 is illustrated in Figure 14.6.
FIGURE 14.6 The total product (concentric rings: generic product, expected product, augmented product, potential product)
If the
generic product is a hotel room, the expected product is equivalent to maintainers such as clean sheets and bathroom, and the augmented product covers enhancers such as helpful staff and speedy room service. Levitt’s potential product is similar to the ‘surprise and delight’ factors and, like Kano, this is where he felt manufacturers needed to focus their competitive strategies. At this level it is a simple intuitive task to identify maintainers and enhancers by dividing customers’ requirements into expected and augmented benefits. However, since gut-feel analysis is clearly a poor basis for management decision making, a more scientific method is required. 14.3.2 Internal metrics An accurate, but time-consuming, method is to track the relationship between performance and customer satisfaction on specific requirements, as illustrated in Figure 14.7, which shows internal metrics for average daily response time over 24 months on the left hand y axis and monthly customer satisfaction scores for response time on the right hand axis.
FIGURE 14.7 Internal and external measures (x axis: months 1-24; left y axis: average response time in days, 0-8; right y axis: customer satisfaction score, 0-10)
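The internal-metrics approach can be sketched as follows. This is a hypothetical illustration: the monthly figures are invented to mirror the pattern in Figure 14.7, and the simple walk-down-the-curve rule is one possible way, not the book's prescribed way, of locating the point where further improvement stops paying off.

```python
# Hypothetical sketch: pair each month's average response time with the
# satisfaction score customers gave for response time, then find the point
# beyond which faster responses stop buying extra satisfaction.
# Data invented to mirror the plateau at two days in Figure 14.7.
months = [
    (7.0, 3.1), (6.5, 3.4), (6.0, 3.9), (5.5, 4.6), (5.0, 5.5),
    (4.5, 6.4), (4.0, 7.2), (3.5, 7.9), (3.0, 8.4), (2.5, 8.7),
    (2.0, 8.9), (1.5, 9.0), (1.0, 9.0),
]  # (avg response time in days, satisfaction score out of 10)

def breakpoint_days(series, min_gain=0.2):
    """Walk from slow to fast response times; return the response time beyond
    which a further half-day improvement adds less than min_gain points."""
    ordered = sorted(series, reverse=True)          # slowest first
    for (slow_t, slow_s), (fast_t, fast_s) in zip(ordered, ordered[1:]):
        if fast_s - slow_s < min_gain:
            return slow_t
    return ordered[-1][0]

print(breakpoint_days(months))  # -> 2.0
```

With this invented data the rule returns two days, consistent with the conclusion in the text that money spent pushing response times below two days would be better invested elsewhere.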
The S-shaped customer satisfaction curve shows that, for the organisation concerned, improving response time from an unacceptable level of five days or longer generates strong gains in customer satisfaction down to an average response time of two days. Beyond this point, the return from further improvements in response time rapidly diminishes. Any company presented with figures like this would have to conclude that the cost of lowering response times below two days would be better invested in other customer benefits. Based on the research of Finkelman23, Myers24
calls these points ‘breakpoints’, and illustrates the concept with a fast food restaurant which found that waiting up to five minutes made no difference to customer satisfaction but that longer waiting times progressively reduced satisfaction. Five minutes was therefore the speed of service breakpoint that must not be exceeded. KEY POINT Where organisations possess accurate records of service levels, they can be compared over time with customer satisfaction data to identify maintainers and enhancers. 14.3.3 Survey data Whilst hard data of the type shown in Figure 14.7 is ideal for making and justifying customer management decisions, such objective internal metrics will often not be available for many aspects of the customer experience. For the softer skills such as staff behaviours, truly objective internal metrics will never be available, so survey data will have to be used for identifying enhancers and maintainers. This is best done by producing the type of chart shown in Figures 14.8 to 14.10. In all three charts, the left hand y axis shows customer satisfaction with the organisation overall (based on an overall satisfaction question), the x axis shows customer satisfaction with the requirement concerned, and the line plots the relationship between those two sets of scores. The right hand y axis and the grey bars show the number of respondents that gave each score for the requirement. Although the outcome variable shown in these three charts is customer satisfaction, it should be whatever outcome is considered most appropriate. Figures 14.11 and 14.12, for example, show customer loyalty as the outcome variable.
FIGURE 14.8 Linear example: quality of product (line: overall satisfaction vs satisfaction with quality; bars: number of respondents per score)
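Constructing the data behind this type of chart is straightforward. The sketch below is a minimal illustration with invented response pairs: for each score customers gave the attribute, it computes the mean overall-satisfaction score of those respondents (the line) and counts how many gave that score (the grey bars).

```python
from collections import Counter
from statistics import mean

# Invented survey data: (score for the attribute, overall satisfaction score)
responses = [(9, 8.5), (9, 9.0), (10, 9.5), (7, 7.0), (7, 7.5),
             (8, 8.0), (3, 4.0), (5, 5.5), (8, 8.5), (10, 9.0)]

def chart_data(responses):
    """Return the line (mean overall satisfaction per attribute score) and
    the bars (number of respondents per attribute score)."""
    by_score = {}
    for attr_score, overall in responses:
        by_score.setdefault(attr_score, []).append(overall)
    line = {s: round(mean(v), 2) for s, v in sorted(by_score.items())}
    bars = Counter(attr for attr, _ in responses)
    return line, bars

line, bars = chart_data(responses)
print(line[9], bars[9])  # -> 8.75 2
```

Plotting `line` against the 1-10 attribute scores, with `bars` on a secondary axis, reproduces the layout of Figures 14.8 to 14.10.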
FIGURE 14.9 Enhancer example: friendliness of the customer service advisor (line: overall satisfaction vs satisfaction with friendliness; bars: number of respondents per score)
Broadly speaking, the relationships depicted in the three charts are linear (Figure 14.8), a satisfaction enhancer (Figure 14.9) and a maintainer (Figure 14.10). However, the main point is that when using real world data, customer requirements simply do not cluster neatly into the type of ‘satisfaction maintaining’ and ‘delight creating’ attributes discussed so far in this chapter.
FIGURE 14.10 Maintainer example: clarity of billing (line: overall satisfaction vs satisfaction with clarity of billing; bars: number of respondents per score)
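One possible heuristic for labelling such curves, not taken from the book, is to compare the satisfaction gained per point of attribute score in the lower half of the range with that in the upper half: a maintainer flattens at the top, an enhancer is steepest at the top. The curves below are invented to resemble Figures 14.9 and 14.10.

```python
# Heuristic sketch (an assumption, not the book's method): classify a curve
# by comparing its slope in the lower half of the score range with its slope
# in the upper half.
def classify(curve, ratio=1.5):
    """curve: {attribute score: mean overall satisfaction}, scores 1-10."""
    scores = sorted(curve)
    mid = scores[len(scores) // 2]
    lo = (curve[mid] - curve[scores[0]]) / (mid - scores[0])   # lower-half slope
    hi = (curve[scores[-1]] - curve[mid]) / (scores[-1] - mid)  # upper-half slope
    if lo > hi * ratio:
        return "maintainer"      # gains flatten out at the top
    if hi > lo * ratio:
        return "enhancer"        # gains accelerate at the top
    return "broadly linear"

billing = {1: 2.0, 3: 5.0, 5: 7.5, 7: 8.3, 9: 8.6}        # invented, flattens
friendliness = {1: 4.0, 3: 4.5, 5: 5.2, 7: 6.8, 9: 9.4}   # invented, accelerates
print(classify(billing), classify(friendliness))  # -> maintainer enhancer
```

As the next paragraph argues, real survey data often defeats any such two-way classification, which is why the gradient of the curve itself is usually the more useful guide.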
KEY POINT In the real world, data for the antecedents of customer satisfaction tend to be broadly linear, similar to Kano’s ‘more is better’ factors. Satisfaction maintainers and enhancers are appealing theoretical concepts that rarely exist in the real world.
Based on an analysis of hundreds of satisfaction surveys conducted by The Leadership Factor and hundreds of thousands of customer responses, the less classifiable pattern of data shown in Figure 14.11 or the broadly linear examples shown in Figures 14.8 and 14.12 would be much more typical than the maintainer and enhancer examples in Figures 14.9 and 14.10. There are several reasons for this. Firstly, as we have stated earlier in this book, customer satisfaction data tends to be positively skewed. The relatively normal distribution shown in Figure 14.10 (although still somewhat positively skewed) is not typical because it reflects an organisation with quite low customer satisfaction. In today’s competitive markets, the data distributions in Figures 14.8, 14.9, 14.11 and 14.12 are much more typical of the level of customer satisfaction usually achieved. As well as making the relationship curve more volatile at lower levels of satisfaction, where there are relatively few respondents even with large samples, this reduces the likelihood of seeing the classic satisfaction maintainer effect, because few of the companies concerned are performing badly enough. Much more relevant, however, is the fact that the real world simply does not often conform to theoretical constructs based on asymmetry, especially regarding the relationship between customer satisfaction and its antecedents. In this respect our conclusions are consistent with those drawn by Michigan University’s Johnson and Gustafsson, who state: “We have found that concern over non-linearities when analyzing quality and
FIGURE 14.11 Unclassifiable data pattern: décor of the restaurant (line: overall satisfaction vs customer satisfaction with décor; bars: number of respondents per score)
satisfaction data is often unwarranted, especially when it comes to attributes and benefits. Although non-linear relationships certainly exist, they tend to be observed more over time … or across market segments. For any given market segment at one point in time, a linear relationship is usually all that is called for.”5 See section 14.3.5 for further examination of this point. Rather than attempting to force customer requirements into categories such as enhancers and maintainers, it will usually be more informative to draw specific conclusions for each attribute based on the gradient of the curve and the distribution of satisfaction scores, as explained in the next two sections. 14.3.4 Return on investment Many companies will see a more or less linear relationship between customers’ requirements and overall satisfaction or loyalty, but some of the lines will be steeper than others. Figure 14.12 uses data from the same company as Figure 14.11. Based on the steepness of the curve, the restaurant is likely to see a much better return on investment from efforts to improve the welcome on arrival than from re-decorating. The outcome variable in Figures 14.11 and 14.12 is loyalty (based on propensity to return and to recommend), rather than satisfaction, but it is clear that the décor makes relatively little difference to most customers’ loyalty – those least liking the décor score an average of 6.2 across the two loyalty questions, whereas customers who most like the décor score only two points higher for loyalty on average. By contrast, there is a range of five points across the loyalty scores given by customers who were most and least satisfied with the welcome on arrival.
FIGURE 14.12 Good return on investment: the welcome on arrival (line: customer loyalty vs satisfaction with the welcome; bars: number of respondents per score)
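The steepness comparison can be sketched very simply: rank attributes by the spread of the outcome variable between the least and most satisfied customers. This is a hypothetical illustration; the numbers loosely echo the décor/welcome example in the text but the data structure is invented.

```python
# Hypothetical sketch: rank attributes by the loyalty range across the
# attribute-satisfaction scale; the steeper the line, the better the likely
# return on investment from improving that attribute. Numbers invented.
curves = {
    # attribute: {attribute satisfaction score: mean loyalty score}
    "decor":   {1: 6.2, 5: 7.0, 10: 8.2},   # ~2-point loyalty range
    "welcome": {1: 4.0, 5: 6.5, 10: 9.0},   # ~5-point loyalty range
}

def roi_ranking(curves):
    """Attributes sorted by outcome spread, steepest (best ROI) first."""
    spread = {a: max(c.values()) - min(c.values()) for a, c in curves.items()}
    return sorted(spread, key=spread.get, reverse=True)

print(roi_ranking(curves))  # -> ['welcome', 'decor']
```

Here the welcome on arrival, with the wider loyalty spread, ranks ahead of the décor, mirroring the conclusion drawn from Figures 14.11 and 14.12.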
KEY POINT When drawing conclusions on how to improve customer satisfaction, organisations should focus on the steepness of the curve. 14.3.5 Customer segments As stated by Johnson and Gustafsson5, asymmetry at attribute level is more likely to be seen over time (as in Figure 14.7) or across segments. The latter effect may be present when the slope of the relationship varies at different points along the curve, as shown in Figure 14.13. Clearly, this financial services company has very well presented employees in the main, but it could obviously benefit from understanding and addressing the concerns of the 10% of customers that score below 7 for satisfaction with staff appearance. It may be that they are more likely to visit specific branches, where management is less focused on staff appearance. It may be that a certain customer segment, such as older, more affluent customers (who may be very important to the company), is more critical of staff appearance. The key point is that by focusing satisfaction improvement efforts on the steepest part of the curve, a superior return on investment will be achieved.
FIGURE 14.13 Segment differences: appearance of employees (line: recommendation vs satisfaction with staff appearance; bars: number of respondents per score)
14.4 Customer delight: myth or reality? As we have seen earlier in this chapter, some customer requirements are said to be ‘delight creating’13 or ‘surprise and delight factors’3,4. Many commentators have claimed that in today’s competitive markets customer delight rather than ‘mere
satisfaction’ is necessary25,26,27. Recent research, however, has cast doubt on the feasibility of delighting the customer. As reported earlier in this chapter, Oliver, Rust and Varki17 supported the positive impact of delight on customers’ future attitudes and behaviour, and found evidence of delight in a study of theme park customers. In the same article, however, the authors found less evidence of delight amongst classical music concert-goers and questioned whether delighting the customer was feasible for less exciting products and services. To investigate this hypothesis further, Finn surveyed website users and found little evidence of surprise or delight and a much stronger relationship between satisfaction and intention to revisit than between delight and intention28. It may be that for most organisations in the 21st century, the concept of customer delight is fundamentally flawed. Kano4 and Levitt22 recognised that today’s delighters are tomorrow’s givens. In the real world, it is virtually impossible and certainly not cost-effective to continually surprise customers by providing something they didn’t expect. Kano’s pure theory is more applicable to the product development process, where setting an objective to introduce a completely new feature that customers had not demanded (or even a totally new product concept such as the original Sony Walkman) may be feasible due to the long timescale and high level of investment involved. However, most organisations today are in service industries where surprising customers is not a practical goal. Organisations must also distinguish between the feasibility of surprising one individual customer (like the colouring book example in Chapter 1) and achieving that effect with enough customers to make any significant difference to the company’s financial performance. 
KEY POINT
Delighting an individual customer may be feasible but achieving surprise and delight with enough customers to significantly affect the financial performance of the business is not practical, especially in service industries.

In fact, due to the service-intensive nature of many companies’ operations, keeping performance on satisfaction maintainers at acceptable levels will often present a considerable challenge without getting distracted by schemes to wow individual customers. Although delivering a continual stream of unexpected benefits is not a realistic strategy, generating very high levels of customer satisfaction by consistently meeting or exceeding customers’ conscious requirements is feasible, although far from easy. However, even the widely accepted goal of exceeding customers’ requirements where possible has been challenged. Schneider and White29 comment on the widespread assumption in service quality literature going back to the SERVQUAL model that meeting customers’ expectations is good but exceeding them is even better30. They question the prevailing view in the service quality field that ‘more is always better’31,
suggesting that some service requirements can be ‘ideal point attributes’29, where performance beyond the ‘ideal’ level will be detrimental to customer satisfaction. This should not be confused with satisfaction maintainers, where performance beyond the adequate or expected level is pointless since it will deliver little if any additional benefit in customers’ eyes, but would not reduce customer satisfaction. By contrast, exceeding customers’ expectations on ideal point attributes would actually have a negative impact on the total customer experience. To illustrate this concept Schneider and White refer to earlier research in the convenience store market32 where excessive friendliness and personal attention were found to conflict with the more important requirements of efficiency and speed of service. In the busy convenience store environment, more smiles were not better beyond the ‘ideal point’. When we consider the ‘ideal point’ concept in the context of everything we have said about customer satisfaction so far in this book, it holds few surprises. Customers base their satisfaction judgements on their feelings about the total customer experience. If two or more requirements are somewhat contradictory, suppliers have to make choices about their performance, and the basis of their decision should always be the requirements’ relative importance to the customer. As we have said many times, to succeed at customer satisfaction, organisations have to ‘do best what matters most to customers’. Since many organisations fail to meet customers’ requirements even on the basics, it is achieving consistently high levels of customer satisfaction, rather than exceeding expectations or surprising and delighting the customer, that will achieve the greatest return for most companies.
This argument is succinctly summarised by Barwise and Meehan33 in their book “Simply Better: Winning and keeping customers by delivering what matters most”: “We believe that your first priority should be to improve performance on the things managers often dismiss as ‘table stakes’, ‘hygiene factors’ or ‘order qualifiers’ (as opposed to ‘order winners’) … companies assume that they need to offer something unique to attract business. Secondly, they assume that years of competition have turned the underlying product or service into a commodity. In reality, what customers care most about is that companies reliably deliver the generic category benefits, but, far too often, that does not happen. Therefore, most businesses have a big opportunity to beat the competition, not by doing anything radical and certainly not by obsessing about trivial unique features or benefits, but instead by getting closer to their customers, understanding what matters most to them, and providing it simply better than the competition.”33

KEY POINT
The vast majority of organisations will most effectively improve their business performance by focusing on elimination of negative customer experiences rather than aiming to exceed customers’ expectations.
Starbucks discovered this fact when its customer satisfaction levels fell, despite ‘delighting’ customers with a succession of new and highly innovative coffee products34. The company had always placed considerable emphasis on new product development and it conducted extensive research into customers’ tastes and attitudes towards new products. Customers did like the new beverages and demand for them was strong, but Starbucks’ focus on innovation had resulted in the company taking its eye off a much less exciting but fundamental element of the customer value proposition – speed of service. The new drinks were often complicated and labour intensive to prepare, increasing the time staff took to serve customers. Falling customer satisfaction was very worrying since the company knew there was a strong relationship between satisfaction and sales. It therefore extended its customer satisfaction research, including measures of importance as well as satisfaction. This showed that fast, convenient service was far more important to customers than new, innovative drinks. Starbucks therefore spent $40 million increasing staff levels as well as improving processes for taking orders and preparing drinks. This increased the percentage of customers being served in under three minutes from 54% to 85%, resulting in a big increase in customer satisfaction.
14.5 The consequences of customer satisfaction

Although they question the role of asymmetric data in causing customer satisfaction or dissatisfaction, Johnson and Gustafsson are much more convinced about the asymmetric relationship between satisfaction and loyalty5.

FIGURE 14.14 Harvard’s asymmetric satisfaction-loyalty relationship
[Chart: loyalty (y axis, 20%–100%) plotted against satisfaction (x axis), rising from the ‘zone of defection’ (saboteur) through the ‘zone of indifference’ to the ‘zone of affection’ (apostle)]

To establish the links
between customer satisfaction and its consequences, the survey questionnaire must contain questions about those outcomes – typically one or more relevant loyalty questions selected from those described in Chapter 9. As with the antecedents of customer satisfaction, the overall strength of a relationship can be established using statistical techniques, but this will not account for the effects of non-linearity. We have already seen in Chapter 2 the classic non-linear curve produced by Harvard Business School to illustrate the asymmetric relationship between customer satisfaction and loyalty, repeated here as Figure 14.1420. There is wide agreement that this relationship is often non-linear5,8,9,12,13. There is less agreement, however, on the precise nature of this asymmetric relationship. Whilst agreeing with Harvard’s principle that customer loyalty is achieved only at the highest levels of satisfaction, Keiningham and Vavra13 illustrate the relationship differently, as shown in Figure 14.15. In practice, as we pointed out for the antecedents of customer satisfaction in Section 14.3.3, there is no such thing as a standard curve that will accurately reflect the relationship between customer satisfaction and loyalty for all companies. This is confirmed by Jones and Sasser10 who illustrate how the relationship typically differs
FIGURE 14.15 Zones of pain, mere satisfaction and delight
[Chart: customer loyalty (y axis) plotted against customer satisfaction (x axis), divided into a zone of pain, a zone of mere satisfaction (‘satisfiers’) and a zone of delight (‘delighters’)]
across five markets (Figure 14.16). Each organisation must therefore identify its own curve, since the nature of its satisfaction–loyalty relationship will have profound implications for its entire customer management strategy.

KEY POINT
There is no standard curve or formula that accurately depicts the relationship between customer satisfaction and loyalty.
FIGURE 14.16 No standard satisfaction-loyalty relationship
[Chart: loyalty (low to high) plotted against satisfaction (low to high) for five markets (local telephone, airlines, hospitals, personal computers, automobiles), each with a differently shaped curve]
Figures 14.17 and 14.18 illustrate the relationship for two different organisations. In both cases, the customer satisfaction index is plotted on the x axis and the loyalty index on the left hand y axis. The right hand axis shows the number of survey respondents at each level of satisfaction. The small caption states the overall customer satisfaction index for each company.

FIGURE 14.17 Satisfaction-loyalty relationship 1
[Chart: loyalty (10%–100%) and number of respondents plotted against customer satisfaction index (25%–95%), with the ‘zone of opportunity’ highlighted. The overall customer satisfaction index is 74.8%]
For Company 1, the slope of the curve is very steep for all levels of satisfaction down to 55%, after which it levels off as there is little loyalty left to lose below this point. The steep part of the curve covers most of the satisfaction range and, more importantly, most of the customers – 88% of the respondents in this survey. The
steep gradient of the curve shows that with 88% of its customers, this organisation faces both an opportunity and a threat. If it could increase satisfaction levels across that range it would gain a large increase in customer loyalty. Conversely, if satisfaction falls it risks losing many customers. The overall index and the shape of the histogram tell us that the satisfaction level of many customers (56% in fact) falls on the steepest part of the curve. To achieve the maximum gain in customer loyalty, all organisations should focus on their ‘zone of opportunity’, which is a combination of where the curve is steepest and where there are most customers. For Company 1, the zone of opportunity is clearly between 55% and 75% satisfaction. Company 1 should therefore base its PFIs on addressing the satisfaction gaps of customers in that zone. At these poor levels of satisfaction it will typically be improving its performance on the basics, or satisfaction maintainers, that will be necessary.

FIGURE 14.18 Satisfaction-loyalty relationship 2
[Chart: loyalty (0%–100%) and number of respondents plotted against customer satisfaction index (55%–100%), with the ‘zone of opportunity’ highlighted. The overall customer satisfaction index is 78.7%]
The second company is in a very different position. The steep part of the curve is from a satisfaction index of 70% downwards, and only 23% of its customers are in this zone. Above that level, not only is loyalty fairly constant, but it is at a very high level of around 90% and above. This may be because there are high switching barriers, so it may not be genuine loyalty, but at this point in time it forms an accurate illustration of the satisfaction – loyalty relationship for this company. In this situation there is clearly little short term benefit in attempting to delight the customer. Moving those scoring 75% to 85%, or 85% to 95% would produce little or no benefit in terms of loyalty. By contrast, working on the issues responsible for the low satisfaction of those in the 55% to 70% satisfaction zone could significantly reduce customer decay. Even though it covers only a minority of its customers, the steepness of the curve dictates that the zone of opportunity for Company 2 is customers with an index between 55% and 70%.
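The ‘zone of opportunity’ logic described above, combining the steepness of the satisfaction–loyalty curve with the number of customers at each satisfaction level, can be sketched in a few lines of code. This is a minimal illustration, not a method from the book: the satisfaction bins and respondent counts loosely echo Figure 14.17, while the loyalty values are hypothetical.

```python
# Hypothetical sketch: finding the 'zone of opportunity' from survey data.
# Respondent counts loosely follow Figure 14.17; loyalty values are invented.
sat = [25, 35, 45, 55, 65, 75, 85, 95]            # satisfaction index bins (%)
loyalty = [10, 12, 15, 20, 45, 70, 85, 90]        # loyalty index per bin (%)
respondents = [10, 28, 33, 63, 149, 124, 113, 75]

# Score each segment of the curve: its slope (loyalty gained per point of
# satisfaction) weighted by how many customers sit in that segment.
scores = []
for i in range(len(sat) - 1):
    slope = (loyalty[i + 1] - loyalty[i]) / (sat[i + 1] - sat[i])
    weight = respondents[i] + respondents[i + 1]
    scores.append((slope * weight, sat[i], sat[i + 1]))

best = max(scores)  # tuples compare on the weighted score first
print(f"Zone of opportunity: {best[1]}%-{best[2]}% satisfaction")
```

In practice the zone is usually wider than a single segment: adjacent segments that are both steep and well populated would be merged, which is how Company 1’s zone comes to span 55% to 75%.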
KEY POINT
To achieve the maximum gain in customer loyalty, companies should focus on their ‘zone of opportunity’, which is a combination of where the curve is steepest and where there are most customers.

It is also possible to use the charts to compare the two companies’ need to invest in satisfaction improvement. Company 1 clearly has more to gain from investing in customer satisfaction and more to lose from not doing so. The steepness of the curve suggests that it is operating in a market with few switching barriers, and most of its customers are in the zone of opportunity where there is a strong relationship with loyalty. Even quite modest gains in customer satisfaction should show a significant economic return since, at the steepest point in the curve, each 1% improvement in satisfaction leads to a loyalty increase of almost 2%, and we know from Chapter 2 that even small gains in loyalty can be very profitable. By contrast, Company 2 is in a much safer position. Its overall index is some way above the steep part of the curve and most customers are on the flat part of the curve where changes in customer satisfaction are not associated with higher or lower customer loyalty. We can call this the ‘zone of stability’. However, Company 2’s curve does display the ‘cliff edge’ phenomenon, where falling satisfaction reaches a point, just above 70% in this example, where loyalty is suddenly and strongly affected. Company 2 would therefore be advised to monitor the situation to ensure that its overall index, and the bulk of its customers, remain above the cliff edge danger point. In the short term it could work on addressing the concerns of the customers just below the cliff edge to reduce the customer decay that is occurring. Compared with Company 1, Company 2 has a much smaller percentage of its customers in the zone of opportunity and would therefore expect a lower financial return on its investment.
KEY POINT
Companies in the ‘zone of opportunity’ have a much stronger business case for investing in satisfaction improvement than those in the ‘zone of stability’.

We can see therefore that improving customer satisfaction will be more profitable for some companies than others, but that almost all organisations will maximise their returns by focusing their improvement efforts and investment where they will produce the greatest return. Ways of improving the effectiveness of actions taken to address PFIs are the focus of the next chapter.
Conclusions

1. If customer satisfaction relationships were linear, a given change in one variable would always result in the same degree of change in its corresponding outcome variable – for example, a 1% increase in customer satisfaction producing a 1% increase in loyalty whatever the level of satisfaction.
2. In the real world the relationship is often much less symmetrical: the impact that changes in customer satisfaction make on loyalty varies at different levels of satisfaction.
3. As well as the consequences of satisfaction (such as loyalty), the relationship between customer satisfaction and its antecedents can also be asymmetric, and this may affect decisions on PFIs.
4. For example, some customer requirements have been labelled ‘satisfaction maintainers’. These are typically essential requirements, so poor performance by the supplier makes a very large negative impact on customer satisfaction but performance above a good level makes little difference.
5. By contrast, ‘satisfaction enhancers’, although often not amongst customers’ most important requirements, can make a very strong positive difference to customer satisfaction at very high levels of performance. This forms the basis of concepts such as attractive quality, delighters and wow factors.
6. Enhancers and maintainers can be identified by tracking internal metrics against customer satisfaction, but this would have to be done over a lengthy period. It is more practical, therefore, to use survey data, plotting each requirement against overall satisfaction or loyalty.
7. In the real world, customers’ requirements rarely conform obediently with consultants’ favourite theories and often do show a fairly linear relationship with overall satisfaction. The gradient of the curve will therefore indicate the requirements most capable of improving customer satisfaction or loyalty.
8. Since surprise is an integral element of delight, it is not feasible for most organisations in today’s service-intensive markets to pursue a strategy of delighting the customer. Performing consistently well on customers’ expected requirements, especially their most important ones, should be their objective.
9. There is widespread agreement that the relationship between satisfaction and its consequences, such as loyalty, is asymmetric, but there is no single universally applicable satisfaction–loyalty curve.
10. To achieve the best return on investment from improving customer satisfaction, companies should focus on the steepest part of their satisfaction–loyalty relationship curve.
References

1. Anderson and Sullivan (1993) “The Antecedents and Consequences of Customer Satisfaction for Firms”, Marketing Science 12 (Spring)
2. Kano, Seraku, Takahashi and Tsuji (1984) “Attractive quality and must-be quality”, Quality, Vol 14 No 2
3. Kano, Seraku, Takahashi and Tsuji (1996) “Attractive quality and must-be quality”, in Hromi, John D (ed), “The Best on Quality”, ASQC Quality Press, Volume 7 of the Book Series of the International Academy for Quality, Milwaukee
4. (1993) Special issue on Kano’s methods for understanding customer-defined
quality, Center of Quality Management Journal, Vol 2 No 4 (Fall)
5. Johnson and Gustafsson (2000) “Improving Customer Satisfaction, Loyalty and Profit: An Integrated Measurement and Management System”, Jossey-Bass, San Francisco, California
6. Johnson, Michael D (1998) “Customer Orientation and Market Action”, Prentice-Hall, Upper Saddle River, New Jersey (for an analysis of the Kano model from the customer satisfaction perspective)
7. Schneider and Bowen (1999) “Understanding Customer Delight and Outrage”, Sloan Management Review 41 (Fall)
8. Fullerton and Taylor (2002) “Mediating Interactive and Non-linear Effects in Service Quality and Satisfaction with Services Research”, Canadian Journal of Administrative Sciences 19 (June)
9. Mittal and Kamakura (2001) “Satisfaction, Repurchase Intent and Repurchase Behavior: Investigating the Moderating Effect of Customer Characteristics”, Journal of Marketing Research 38 (February)
10. Jones and Sasser (1995) “Why Satisfied Customers Defect”, Harvard Business Review 73 (November-December)
11. Oliver, Richard L (1997) “Satisfaction: A Behavioural Perspective on the Consumer”, McGraw-Hill, New York
12. Anderson and Mittal (2000) “Strengthening the Satisfaction-Profit Chain”, Journal of Service Research, Vol 3 No 2
13. Keiningham and Vavra (2003) “The Customer Delight Principle”, McGraw-Hill, Chicago
14. Westbrook, Robert A (1987) “Product/Consumption-Based Affective Responses and Postpurchase Processes”, Journal of Marketing Research 24 (August)
15. Holbrook and Hirschman (1982) “The Experiential Aspects of Consumption: Consumer Fantasies, Feelings and Fun”, Journal of Consumer Research 9 (September)
16. Plutchik, Robert (1980) “Emotion: A Psychoevolutionary Synthesis”, Harper and Row, New York
17. Oliver, Rust and Varki (1997) “Customer Delight: Foundations, Findings and Managerial Insight”, Journal of Retailing 73 (Fall)
18. Hill and Alexander (2006) “The Handbook of Customer Satisfaction and Loyalty Measurement”, 3rd edition, Gower, Aldershot
19. Taylor and Schroeder (2003) “Inside Intuit”, Harvard Business School Press, Boston
20. Heskett, Sasser and Schlesinger (1997) “The Service Profit Chain”, Free Press, New York
21. Levitt, Theodore (1969) “The Marketing Mode”, McGraw-Hill, New York
22. Levitt, Theodore (1980) “Marketing Success through Differentiation – of Anything”, Harvard Business Review 58 (January-February)
23. Finkelman, Daniel (1993) “Crossing the zone of indifference”, Marketing
Management 2(3)
24. Myers, James H (1999) “Measuring Customer Satisfaction: Hot Buttons and Other Measurement Issues”, American Marketing Association, Chicago, Illinois
25. Daffy, Chris (2001) “Once a Customer, Always a Customer”, Oak Tree Press, Dublin
26. Shaw and Ivens (2002) “Building Great Customer Experiences”, Palgrave Macmillan, Basingstoke
27. Keiningham, Vavra, Aksoy and Wallard (2005) “Loyalty Myths”, John Wiley and Sons, Hoboken, New Jersey
28. Finn, Adam (2005) “Reassessing the Foundations of Customer Delight”, Journal of Service Research 8(2)
29. Schneider and White (2004) “Service Quality: Research Perspectives”, Sage Publications, Thousand Oaks, California
30. Parasuraman, Berry and Zeithaml (1985) “A conceptual model of service quality and its implications for future research”, Journal of Marketing 49(4)
31. Brown, Churchill and Peter (1993) “Improving the measurement of service quality”, Journal of Retailing 69(1)
32. Sutton and Rafaeli (1988) “Untangling the relationship between displayed emotions and organizational sales: The case of convenience stores”, Academy of Management Journal 31(3)
33. Barwise and Meehan (2004) “Simply Better: Winning and keeping customers by delivering what matters most”, Harvard Business School Press, Boston
34. McGovern, Court, Quelch and Crawford (2004) “Bringing Customers into the Boardroom”, Harvard Business Review, November
CHAPTER FIFTEEN
Using surveys to drive improvement

Improving customer satisfaction is very difficult. Whilst customers are often quick to form negative attitudes if they receive poor service, they tend to be much slower to revise their opinions in a positive direction when a supplier improves, possibly because customers expect good service so take it for granted when it happens. Added to the difficulty of shifting customers’ attitudes, organisations often display considerable inertia when it comes to making changes or improvements in processes, staff behaviours, policies or many other engrained practices that may need challenging to improve customer satisfaction. This is why it is essential that the CSM methodology contributes to rather than detracts from the organisation’s ability to make improvements, and there are four main facets to this. First, it must be accurate, providing a measure that truly reflects how satisfied or dissatisfied customers actually feel. As we explained in Chapters 4 and 5, basing the survey on ‘the lens of the customer’ is the key methodological requirement in this respect. Second, it must be a tough measure, based on making customers more satisfied rather than making more customers satisfied, since a score that looks good will play right into the hands of any voices in the organisation that oppose change, investment or additional activity to improve customer satisfaction. This was illustrated in Chapter 3. Third, it must be sensitive enough to detect the small changes in customer satisfaction that typically occur. As we pointed out in Chapter 8, use of a 10-point numerical scale is the essential requirement here. Although the first three points are all essential foundations of a sound CSM methodology, the most important aspect for improving customer satisfaction is the fourth – the actionability of the outcomes generated by the survey. Since this is the biggest weakness of many customer satisfaction surveys, we will devote this chapter to addressing it.
At a glance

In this chapter we will:
a) Examine the disadvantages of too much data.
b) Review the type of survey information that should be reported.
c) Explore the differing reporting requirements of annual surveys compared with continuous tracking.
d) Explain how to justify the survey’s conclusions and recommendations for PFIs.
e) Consider the potential conflict between actionable outcomes and an accurate survey based on ‘the lens of the customer’.
f) Explain how to use Customer Experience Modelling (CEM) to overcome this conflict.
g) Show how CEM can help organisations to monitor improvement.
h) Outline a simple method for demonstrating that customer satisfaction pays.
15.1 What are not actionable outcomes?

Judging by much of the CSM survey output that we have seen, many organisations, and even their research agencies, don’t know the answer to this question. It is easy to give examples of common outcomes that are not useable for improving customer satisfaction.

15.1.1 Too much data

Worst of all is too much data, typically in the form of what researchers call ‘cross-tabs’, that split the results by every segment known to man. We have often seen table after table of cross-tabs filling a report the size of a telephone directory. Migrating from paper to a more up-to-date method of delivery, such as an interactive web reporting site, greatly improves the speed of finding specific pieces of information but does not address the problem of actionability of outcomes. This is because, rather like ‘ideal point attributes’, more is not better. Whilst we are in favour of drilling down into the data to identify any useful and statistically significant differences between customer types, customer attitudes (e.g. customers with different requirements or differing levels of satisfaction) or business units, most cross-tabs simply don’t show them. Therefore, whilst drilling down is a painstaking but potentially useful task that should be carried out by someone in the organisation’s research department (or its research agency), only the conclusions from the very small proportion of cross-tabs that will affect the action taken by the organisation to improve customer satisfaction should be presented to managers or employees. Wasting time studying and debating information that may be interesting but will make no difference to any action taken to improve customer satisfaction is one of the main reasons for the failure of many organisations’ CSM processes.
KEY POINT
It is the quality, not the quantity, of CSM data that will lead to improvement in customer satisfaction.

15.1.2 ‘So what?’ conclusions

The purpose of CSM is not to produce an interesting sociological study of customers’ attitudes but to improve customer satisfaction, and hence customer loyalty and the business performance of the organisation. One could report, for example, that 55%
of customers are satisfied with speed of service but 45% are dissatisfied with it, that over 50s are more satisfied than under 25s, or that 60% of customers are willing to recommend the organisation compared with 28% who are not (giving a net promoter score of 32%), but none of it would have any value. However interesting it might be, it adds no value to the organisation because it doesn’t tell busy managers what to do to improve customer satisfaction. To judge the value of CSM survey conclusions, apply the ‘so what?’ test. What difference would knowing that information make to any decisions made or action taken by anyone in the organisation?
15.2 What are actionable outcomes?

15.2.1 Concise information

To produce change, CSM survey outcomes will have to engage the attention of senior management and operational middle management. To accomplish this difficult task, they must be presented with all the information they need (and none they don’t need) in a clear and concise form. Whilst norms about exactly how much information should be provided to management will vary between organisations, it is possible to generalise about how much information is typically needed to convey the essentials of a customer satisfaction survey, and in this respect there will be differences between annual surveys and more frequent continuous tracking. Annual surveys will need more background information in an executive summary since the basics of the CSM methodology may not be remembered by all managers from one year to the next. It can therefore be useful to remind people that the results provide an accurate reflection of customer satisfaction because the questions were based on the ‘lens of the customer’, identified through exploratory research. A reminder of the bare essentials of the data collection will also be helpful, including dates of fieldwork, method of data collection, representativeness of the sample and, especially for self-completion, the response rate. Next come brief details of the results – the headline measure of customer satisfaction and how good, or poor, this is. If the survey has been conducted previously, comparing against the company’s own previous score provides the best performance yardstick, but benchmarking against other organisations is also useful (especially to stimulate action if satisfaction is poor), and is essential for first time surveys to provide context. Finally, and most importantly, come the PFIs – the actions that the organisation must take to improve customer satisfaction.
For first time surveys it can also be helpful to add brief details of any other useful customer satisfaction improvement initiatives such as the internal and external feedback covered in the next two chapters. Even for updates, if they are a year or more apart, a brief reminder of the value of feedback would be advisable. In very concise form, this information can be squeezed onto one sheet of paper, but two pages are more realistic. Ideally, the information would be presented to management, giving them the opportunity to ask questions.
Executive summaries for continuous tracking surveys can be briefer since managers soon become familiar with the basics of the methodology. Consequently there are only two fundamental pieces of information that managers need to know on a regular basis. First, is the organisation succeeding in improving customer satisfaction, and second, what action should now be taken to improve it further (or reverse any decline)? This information can easily be provided on less than one page, although two aspects of continuous tracking should also be considered. Firstly, since monthly changes in customer satisfaction will typically be small they may not show through in the results and the headline measure may move up and down a little from one month to the next. To judge the organisation’s performance, managers must therefore be given enough information to understand the trend. Ideally this should provide a short and medium term perspective. For example, ‘At 84.3% the customer satisfaction index is now 3.1% above its baseline 2 years ago and has risen for 4 of the last 6 months.’ This type of trending enables the organisation to avoid getting bogged down in month-by-month technical details such as confidence intervals, which almost always detract from, rather than add to, the company’s propensity and ability to take action. A second characteristic of monthly tracking that affects reporting to management is the fact that due to the slowness of improving customer satisfaction, the PFIs may not change for quite a few months at a time, and this can give some managers the impression that no progress is being made. This can be mitigated by presenting suitable trend data for individual PFIs as well as for overall satisfaction, but periodically it will still be helpful to remind managers that improving customer satisfaction is a long haul rather than a quick fix and that due to the nature of customer satisfaction it is not abnormal for some requirements to be almost perpetual PFIs. 
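A trend summary of the kind quoted above can be produced mechanically from the monthly index scores, avoiding the month-by-month noise that the text warns about. This is a minimal sketch, not a method from the book; the monthly figures are hypothetical and chosen only to illustrate the wording.

```python
# Hypothetical sketch: turning monthly CSI scores into a trend summary.
# Scores are invented; most recent month last.
monthly_csi = [81.2, 81.9, 81.5, 82.4, 83.0, 82.8, 83.5, 84.3]

baseline = monthly_csi[0]   # score at the start of the tracking period
current = monthly_csi[-1]

# Count month-on-month rises over the last six months
# (seven scores give six month-on-month changes).
last7 = monthly_csi[-7:]
rises = sum(1 for prev, cur in zip(last7, last7[1:]) if cur > prev)

print(f"At {current}% the customer satisfaction index is now "
      f"{current - baseline:.1f}% above its baseline and has risen "
      f"for {rises} of the last 6 months.")
```

With these invented figures the summary reads much like the book's example sentence, giving managers the short and medium term perspective in a single line.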
This will apply when a requirement is extremely important to customers, is a ‘more is always better’ attribute, and will be exacerbated by the tendency for customers’ expectations to increase, resulting in the organisation having to improve performance just to prevent customer satisfaction from declining. Organisations that are prepared to continually invest in new and better ways of ‘doing best what matters most to customers’ on a restricted range of requirements that are always critically important to customers will achieve much higher levels of customer satisfaction, loyalty and profit than those always looking for new quick wins.

KEY POINT
It is not unusual for some highly important customer requirements to be long term PFIs. The most successful organisations recognise this and accept the long haul of continually investing in new ways to do best what matters most to customers.

15.2.2 Clear authoritative conclusions

To have credibility with managers in most organisations, conclusions and
Chapter fifteen: Using surveys to drive improvement
recommendations need to be clear-cut and definitive. As we suggested in Section 15.1.2, trying to present all sides of an argument or providing a level of detail that leads to lengthy debates about the real meaning and messages in the information is always counter-productive for improving customer satisfaction. CSM should not follow the reporting conventions of most market and social research and all academic research, where all sides of the argument are typically reported even if it leads to no conclusions whatsoever. Customer satisfaction surveys are different. They have only one purpose – improving customer satisfaction. As we have seen in Chapter 12, customer satisfaction improvement is best achieved by focusing on only one or a very small number of PFIs, since trying to make improvements across the board results in changes being too small for customers to notice and to change their attitudes. When 15 to 20 customer requirements are measured in a CSM survey, it is often possible to create arguments for taking action on quite a number of them. Some may have low satisfaction scores, some will be extremely important to customers, some will have high impact coefficients, others will have attracted some very negative customer comments. Suggesting too many PFIs, or even reviewing too many options for potential PFIs, will almost always be detrimental to the organisation's success in improving customer satisfaction. Therefore, even when one could construct a credible case from the data for many of the requirements being PFIs, it is essential to select one or a very small number, using judgement if necessary as a tie-breaker, and authoritatively present them to management and colleagues as the conclusions and recommendations.

KEY POINT Clear, unambiguous conclusions, authoritatively presented, are essential for improving customer satisfaction and loyalty.
15.2.3 Justifiable conclusions
The fact that conclusions should be clear and authoritative rather than too lengthy or open to debate does not take away the need to justify them. To believe in them, managers need to understand the logic behind focusing their customer satisfaction improvement efforts and resources on a very small number of PFIs. As well as explaining the rationale behind this approach, each PFI should be justified from the survey outcomes. It might have the largest satisfaction gap, the most negative comments, or possibly be a fundamental satisfaction maintainer where performance and customer satisfaction are not reaching the basic minimum level required. The table of outcomes shown in Figures 12.5 and 12.8 is an excellent concise and visual method for justifying the choice of PFIs.

15.2.4 Precise, actionable PFIs
To have the best chance of improving customer satisfaction, managers need to make the changes that will be most valued by customers. It will be much easier for them to do this if the survey generates very specific, tangible actions that are easy to
implement rather than very broad areas for improvement. For example, saying that supermarket floor staff need to be more helpful to customers is rather vague since it does not clarify how they should be more helpful. It is much more useful to recommend that when a customer asks for help in locating a product, they should lead the customer to the product, ascertain precisely what the customer wants, place the required item(s) in the trolley and ask if the customer needs any more help.

15.3 Accuracy versus actionability
To maximise the actionability of customer survey outcomes, managers will understandably want to fill the questionnaire with highly specific questions, like the following examples.
"If you asked for help to find a product, did the assistant take you to the correct location?"
"Did the assistant ask you if you needed any more help with anything else?"
There are three major problems with this approach.
1. Questions like those above are dichotomous, not scalar, so cannot be used to provide a measure of customer satisfaction. Without a trackable measure of customer satisfaction it will be impossible to judge unequivocally if the organisation is making progress in improving customer satisfaction.
2. To cover the entire customer experience with highly specific questions like the examples shown would result in a questionnaire that was very much longer than the ten minutes recommended in this book.
3. Most seriously, questions like the ones above are typical 'lens of the organisation' as opposed to 'lens of the customer' questions. As we explained in Chapter 4, when judging organisations, customers simply do not think in such specific, operational terms. Their attitudes are based on much more general impressions, such as how helpful or unhelpful the employees typically are.

KEY POINT Highly specific questions aid actionability but often conflict with the lens of the customer.
As we have emphasised earlier, for an accurate measure of customer satisfaction that truly reflects how satisfied or dissatisfied customers feel, the questionnaire must be based on exactly the same criteria that customers use to make that judgement. Hence the exploratory research to understand the ‘lens of the customer’ as the basis for designing the questionnaire. The disadvantage of the ‘lens of the customer’ approach is that the broad constructs used by customers to form their satisfaction judgements will, by definition, tend to be rather general and therefore not very actionable. However, there is a way to retain the accuracy of the ‘lens of the customer’ measure whilst also giving managers the actionable outcomes that they want and need, as explained in the next section.
15.4 Customer Experience Modelling
The examples given in Section 15.3 provide a good illustration of the accuracy versus actionability conflict. Customers tend to judge organisations on broad perceptions such as 'helpfulness of staff', but if it becomes a PFI many operational managers will not see it as an actionable outcome since it doesn't tell them precisely how to make their staff more helpful. By contrast, the much more specific questions on helpfulness are totally actionable, so the starting point is to use the time available in the interview to insert some highly specific, actionable additional questions just on the PFIs. However, Customer Experience Modelling (CEM) involves far more than simply inserting a few additional questions. It actually adds three benefits to a CSM process:
1. It makes the survey outcomes more actionable.
2. It helps the organisation to monitor progress, especially when frequent tracking will not demonstrate a significant uplift in customer satisfaction every month.
3. It can provide information on the return on investment in customer satisfaction, whether on specific actions taken or on generally improving customer satisfaction.
We will start with actionability, firstly from the perspective of questionnaire design and then in terms of turning the answers into actionable outcomes.

15.4.1 Designing CEM questions
Since the first purpose of CEM is to provide additional, more focused information around the PFIs to make the outcomes more useful for managers, decisions about what questions to ask are crucially important. Three factors should influence the focus of CEM questions:
1. Customer comments generated from probing low satisfaction provide a good starting point for where to focus the questions. Analysing the comments for each PFI will identify the main causes of dissatisfaction with the requirement and provide excellent ideas for CEM questions.
2.
Customer-facing employees often have considerable insight into the problems experienced by customers, and their views can be canvassed through workshops or discussion groups and, more quantifiably, through a mirror survey (see Chapter 16).
3. Since these are 'lens of the organisation' questions they must fit in with the way the organisation works if they are to produce actionable outcomes, so managers should approve them for compatibility with processes, budgetary constraints and any other internal issues.
Having agreed the focus of the questions, their wording becomes critical to actionability. Dichotomous questions are often the most useful in this respect since they are very tangible and are easy for customers to answer.
"Did your account manager agree specific deadlines with you for each phase of the project?"
"Did the customer service representative call you back on the agreed date?"
Sometimes, dichotomous questions are not precise enough. Where time is involved, you need to know how long something took, whether in minutes, weeks or whatever unit of time is appropriate. Examples include:
"How long is it since you met your account manager?"
"After you reported the fault, how long was it before the engineer arrived?"
Questions like these should be asked as open questions in interviews, with a coding frame for interviewers to classify responses into appropriate blocks of time. With self-completion questionnaires, it would normally be a closed question with the customer ticking one of the response options. However, as we will explain in Section 15.4.2, the response options may be determined by how it will be analysed.
Another effective CEM questioning routine is to take the customer through a short sequence of events, as with the following questions designed to understand customers' problem experience.
"Have you experienced a problem with XYZ Ltd in the last 3 months (or appropriate length of time)?" Yes / No
If yes: "Please give brief details of the problem." Interviewer can code responses into the most common categories, e.g. "produce not fresh / faulty or substandard product (other than fresh produce) / unavailability of products / poor or off-hand service provided in-store / etc"
"Did you report the problem to anyone at XYZ?" Yes / No
If yes: "Who did you report it to?" (Appropriate response options here, e.g. "Store assistant / Customer Service Desk in store / Telephoned the customer helpline / Made a formal complaint to a manager")
If no: "Why didn't you report it?" (Appropriate response options here, e.g. "Didn't know who to report it to / Don't like complaining / Didn't think anything would be done about it / Problem discovered at home, difficult to report")
"How satisfied or dissatisfied were you with the way your problem was handled?" This is most usefully scored on the same 10-point scale as all the other satisfaction questions.
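A branching sequence like the one above can be represented directly in code, which is a convenient way to check the routing logic before the questionnaire is scripted. The sketch below is illustrative; the question identifiers are invented for this example and are not from the book.

```python
def problem_sequence(answers):
    """Return the follow-up questions that apply to one respondent,
    following the branching problem-experience sequence described above.
    `answers` maps question ids to the respondent's answers."""
    followups = []
    if answers["had_problem"]:
        followups.append("problem_details")
        if answers["reported"]:
            # Reporters are routed to 'who' plus the 10-point handling question
            followups.extend(["reported_to_whom", "satisfaction_with_handling"])
        else:
            followups.append("why_not_reported")
    return followups

# A respondent who had a problem but did not report it
print(problem_sequence({"had_problem": True, "reported": False}))
```

Encoding the routing this way also makes it trivial to verify that every respondent path ends at a valid question.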
15.4.2 Driving actions
A key benefit of CEM is the provision of clear and simple results that link directly into things that employees are doing, or not doing. Dichotomous questions are particularly good for producing black and white, irrefutable information. A zero defects policy for the two behaviours reported in Figure 15.1 seems reasonable customer satisfaction practice, providing managers in the companies concerned with tangible information for taking action.
FIGURE 15.1 Dichotomous question outcomes
Specific deadlines agreed: Yes 78%, No 22%
Called back on agreed date: Yes 94%, No 6%
Time-based questions, such as the length of time since customers met their account manager or how long it took the service engineer to arrive, can be analysed in simple or complex ways. For actionability, simplicity is best, so the responses should be grouped into three or four time bands as shown in Figure 15.2. Clearly, if it is policy that all customers should see their account manager at least quarterly, or that service engineers should always visit within 3 days of a fault being reported, the organisations and managers involved in these examples have work to do. The actionability of CEM questions can be further improved by flagging customers by account manager, service team etc so that the individuals or teams failing to meet the targets are highlighted. The simple act of reporting and monitoring this information usually stimulates significant performance improvement.

FIGURE 15.2 Time-based questions
Last saw account manager: Less than 1 month 32%, 1-3 months 40%, More than 3 months 28%
Speed of engineer's visit: Less than 24 hours 26%, 2 days 38%, 3 days 27%, More than 3 days 9%
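Grouping raw time responses into reporting bands like those in Figure 15.2 is a simple bucketing exercise. A minimal sketch follows; the band edges and the sample data are illustrative assumptions, not figures from the book.

```python
from collections import Counter

def band_for(hours):
    """Classify a fault-to-visit time (in hours) into a reporting band."""
    if hours < 24:
        return "Less than 24 hours"
    if hours < 48:
        return "2 days"
    if hours < 72:
        return "3 days"
    return "More than 3 days"

def band_percentages(response_hours):
    """Percentage of responses falling in each time band."""
    counts = Counter(band_for(h) for h in response_hours)
    n = len(response_hours)
    return {band: round(100 * c / n) for band, c in counts.items()}

# Illustrative sample of fault-to-visit times in hours
times = [10, 30, 55, 80, 20, 40]
print(band_percentages(times))
```

The same bucketing also serves as the coding frame for interviewers, keeping the analysis consistent with the questionnaire.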
KEY POINT For maximum actionability link CEM questions to specific individuals or teams.
When there is a sequence of questions, like the problem handling examples, it enables managers to identify precisely where failures are occurring, to a level of detail that may
surprise even the employees doing the work. Answers to the individual questions can be reported as pie charts, but CEM becomes a more powerful tool when questions are linked together and reported as a flow chart. If several questions comprise the sequence, it is important not to overload the audience with too much detail. It is preferable to be selective, including just the questions necessary to make an actionable point. Figure 15.3 illustrates the useful outcome that whilst customers report most problems, they seem to be reluctant to report service problems.

FIGURE 15.3 CEM flow chart (1) (sample = 3000 customers)
Problem experienced: Yes 42%, No 58%
Problem with / problem reported:
Fresh produce 38% / reported: Yes 73%
Other products 13% / reported: Yes 94%
Unavailability of product 21% / reported: Yes 76%
Poor service 28% / reported: Yes 32%
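A flow chart like Figure 15.3 is simply a cross-tabulation of two linked questions. The sketch below shows how the per-category reporting rates could be produced from raw linked responses; the sample records are illustrative, not the survey data behind the figure.

```python
from collections import defaultdict

def reporting_rates(responses):
    """responses: list of (problem_category, was_reported) tuples for
    customers who experienced a problem. Returns % reported per category."""
    totals = defaultdict(int)
    reported = defaultdict(int)
    for category, was_reported in responses:
        totals[category] += 1
        if was_reported:
            reported[category] += 1
    return {c: round(100 * reported[c] / totals[c]) for c in totals}

# Illustrative linked answers from the problem sequence
data = [("Fresh produce", True), ("Fresh produce", True),
        ("Poor service", False), ("Poor service", False),
        ("Poor service", True), ("Other products", True)]
print(reporting_rates(data))
```

Flagging each record with store, team or account manager extends the same cross-tabulation to the individual level recommended earlier.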
If there is a reluctance to report problems, it is clearly useful to understand why. A good CEM question sequence might show that there is no simple answer. As illustrated in Figures 15.4 and 15.5, the reasons might vary according to the nature of the problem.

FIGURE 15.4 CEM flow chart (2) (sample = 353 customers)
Service problems reported: Yes 28%, No 72%
Why not reported: Didn't know who to report it to 6%; Don't like complaining 11%; Didn't think anything would be done 83%; Problem discovered at home - hassle to report 0%
Since it is desirable for any problem to be reported and handled as soon as possible after its occurrence, the company needs to convince customers that the in-store Customer Service Desk welcomes customers' feedback on any type of problem or concern and that all will be taken very seriously, whether product or service related.

FIGURE 15.5 CEM flow chart (3) (sample = 479 customers)
Fresh produce problems reported: Yes 73%, No 27%
Why not reported: Didn't know who to report it to 11%; Don't like complaining 9%; Didn't think anything would be done 17%; Problem discovered at home - hassle to report 63%
The situation with fresh produce problems is different. The minority that are not reported are mainly due to the fact that once back home, customers are not clear about how to report them unless they go back to the store, which they may not be doing for another week or more. Raising awareness of the telephone helpline is the obvious solution here and may also help for service problems if some customers prefer to report service problems using a less personal medium rather than face-to-face in-store.

15.4.3 Focusing PFI actions
The actionability of CEM outcomes is obvious, but the technique also enables organisations to make decisions about whether taking action is worthwhile. Whatever the policies of the organisation about maximum response times or the percentage of customers that should receive a follow-up call, there's no point investing in service enhancements that don't improve customer satisfaction. As we said in Chapter 12, selection of PFIs should be based mainly on where the organisation is least meeting its customers' requirements, but we know from Chapter 14 that some requirements can make more impact than others on improving customer satisfaction due to the asymmetry of CSM data. The same principle applies to the very focused action that will be taken to address the chosen PFI(s). If our
retailer had adopted 'handling problems and complaints' as its PFI, Figures 15.6 and 15.7 show how CEM could help it to determine specific actions that would have more impact on improving customer satisfaction.

FIGURE 15.6 Using CEM to focus PFI actions (1) (sample = 353 customers)
Service problem reported: Yes 28%, No 72%
Satisfaction Index: Yes 79%, No 76%
Definitely use XYZ for next shop: Yes 88%, No 87%
Whilst initial analysis suggested that improving the reporting rate for service problems would be useful, Figure 15.6 demonstrates that it would not make much impact on customer satisfaction and loyalty, since in this example, customers not reporting service problems are almost as satisfied and loyal as those who do. By contrast, Figure 15.7 shows that although failure to report problems with fresh produce appears to be a much smaller issue, it actually makes far more impact. Customers who find it to be less than fresh when they arrive home, and end up throwing it away rather than reporting it, change their attitudes and future behaviour towards the store to a much greater degree, so promoting the use of the telephone helpline for such problems would be the best action to take.

FIGURE 15.7 Using CEM to focus PFI actions (2) (sample = 479 customers)
Fresh produce problem reported: Yes 73%, No 27%
Satisfaction Index: Yes 86%, No 67%
Definitely use XYZ for next shop: Yes 97%, No 84%
15.4.4 Monitoring improvement
The tangible nature of CEM outcomes makes it easy for managers to set targets for improvement, as well as enabling them to judge whether agreed actions and policies are being implemented and, more importantly, noticed by customers. Based on the information in Figure 15.2, the organisation clearly has a problem, with 27% of visits from service engineers taking longer than the specified maximum of three days from the fault being reported. Whilst it may not be realistic to expect immediate implementation of the three-day maximum, managers can set tangible targets for reductions in the percentage of visits exceeding three days and can use CEM to monitor the company's progress – as perceived by customers.

FIGURE 15.8 Monitoring progress: % of visits over 3 days from reporting the fault, tracked monthly from Jan to Dec
The ability to monitor progress is a very useful feature of CEM, especially when continuous tracking may not demonstrate a significant uplift in customer satisfaction every month. This is particularly helpful with very broad customer requirements such as 'keeping promises and commitments', 'value for money' or 'ease of doing business', where it will be slow and challenging to improve customers' perceptions. CEM will considerably help to address the problem of the same non-changing PFIs by providing visible progress on a sequence of small, manageable steps towards addressing the PFI.
KEY POINT Well designed CEM questions provide early feedback on the effectiveness of organisations' satisfaction improvement initiatives.
15.4.5 Demonstrating return on investment
A very important purpose of CEM is to provide information on the return on investment in customer satisfaction, whether the return on specific actions taken or the return on generally improving customer satisfaction. Figure 15.9 is an example of using CEM for this purpose. In this example it shows that reducing the percentage of overdue service visits results in a big improvement in satisfaction with 'speed of service', a good improvement in overall customer satisfaction and, most importantly, an excellent improvement in loyalty, as illustrated by intention to renew the contract.

FIGURE 15.9 Using CEM to demonstrate return on investment (1000 customers interviewed on both dates)
Date | % service visits over 3 days after fault reported | Satisfaction with speed of service | Customer satisfaction index | % will definitely renew contract
January 2006 | 27% | 6.1 | 79.2% | 44%
January 2007 | 5% | 8.4 | 81.7% | 59%
Sometimes, actions taken by organisations will not translate into improved customer perceptions, and on other occasions will not provide a financial return by driving customers' loyalty behaviours. CEM enables organisations conducting monthly tracking surveys to fine tune actions, continuing with those that are making an impact until the rate of return levels off, but abandoning any actions that are clearly not making a difference so that resources can be switched to a different area. CEM will also help managers to keep employees motivated by providing quick feedback on efforts they have made, as well as enabling them to report tangible improvements to senior management.
KEY POINT CEM provides simple and convincing evidence of whether actions to improve customer satisfaction are also increasing profit.
Although CEM's ability to visibly link customer satisfaction improvement with business outcomes is of most value to commercial companies, it can also be useful to not-for-profit organisations provided appropriate business outcomes are used in the
model. All organisations are interested in cost control, and as we pointed out in Chapter 2, the cost of servicing customers normally increases as their satisfaction falls. Reputation and trust are also consequences of satisfaction that are desirable in the public as well as private sectors. Using a complaint handling example, Figure 15.10 shows a CEM model of relevance to profit and not-for-profit organisations.

FIGURE 15.10 Demonstrating business impact in not-for-profit sectors (sample of 2000 customers; had a problem: Yes 38%, No 62%)
Problem resolution | Number of calls | Total call duration | Cost to service
Satisfied with problem handling (40%) | Average 1.9 calls | Average 17.1 minutes | £12.83 per customer
Partially satisfied with problem handling (19%) | Average 3.1 calls | Average 46.5 minutes | £34.88 per customer
Dissatisfied with problem handling (26%) | Average 6.2 calls | Average 136.4 minutes | £102.30 per customer
Did not report problem (15%) | Average 0.7 calls | Average 7.7 minutes | £5.78 per customer
No problem (62% of sample) | Average 0.4 calls | Average 3.6 minutes | £2.70 per customer
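The cost column in Figure 15.10 follows directly from each segment's average call-handling time and the hourly call-centre cost (£45 in this example). A minimal sketch of the calculation, using the durations from the figure:

```python
HOURLY_COST = 45.0  # fully loaded call-centre cost per hour, from the example

def cost_per_customer(avg_minutes):
    """Average cost of servicing a customer in a segment, given the
    segment's average total call-handling time in minutes."""
    return avg_minutes / 60 * HOURLY_COST

# Average call durations (minutes) per problem-resolution segment, Figure 15.10
segments = {
    "Satisfied with problem handling": 17.1,
    "Partially satisfied with problem handling": 46.5,
    "Dissatisfied with problem handling": 136.4,
    "Did not report problem": 7.7,
    "No problem": 3.6,
}
for name, minutes in segments.items():
    print(f"{name}: £{cost_per_customer(minutes):.2f} per customer")
```

If fuller cost data is available (written correspondence, face-to-face handling), the same per-segment averaging applies; the call-log figure is simply a lower bound.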
Since most organisations with contact centres have call logging data matched with individual customer codes, it is a simple task to look at the number of telephone contacts with each customer and calculate the average number of calls per customer in each category. If the information is available, an even more accurate figure would be the total duration of calls averaged in each category. This would be the total time spent by the call handler including, for example, writing up time after the call. For organisations making outbound sales calls, the figures might have to be based on inbound calls only, although the system would ideally distinguish between sales calls and customer service calls, in which case only the former should be excluded. In reality, there would often be other costs associated with servicing customers such as writing to complainants or dealing with problems face-to-face. If information is available on all relevant costs it should obviously be used in the model, but if not, the call log data used in Figure 15.10 would illustrate the point, albeit under-estimating the real cost of customer dissatisfaction. The final step is to incorporate the cost of all this time spent servicing customers. This should obviously be total costs, not salaries alone. Based on a call centre hourly cost of £45, the chart shows that, for the organisation concerned, the cost
of servicing customers who were dissatisfied with the way their problem was handled averaged over £100 per customer, compared with only £12.83 for customers whose problem was handled well. Best of all, of course, at only £2.70, is the cost of servicing customers who didn't have a problem in the first place.
KEY POINT CEM can also link customer satisfaction improvement to relevant business outcomes in not-for-profit sectors.
Since customers who did not report their problem cost only £5.78 to service, some people might suggest that encouraging customer complaints was not a sensible policy. There will, however, be other factors to consider. Commercial companies should include information about relevant loyalty behaviours, such as propensity to renew the contract or policy, likelihood of buying more products, or simply remaining a customer. All organisations can link the information with reputation and trust information as shown in Figure 15.11.

FIGURE 15.11 The impact of complaint handling on satisfaction and trust (sample of 2000 customers; had a problem: Yes 38%, No 62%)
Problem resolution | Satisfaction Index | Trust to look after my interests as a customer | Willing to recommend
Satisfied with problem handling (40%) | 83.1% | 89.2% | 95.4%
Partially satisfied with problem handling (19%) | 67.4% | 66.4% | 59.8%
Dissatisfied with problem handling (26%) | 52.3% | 38.7% | 23.4%
Did not report problem (15%) | 61.5% | 42.0% | 34.8%
No problem (62% of sample) | 82.8% | 87.3% | 85.9%
It is now clear to see that customers who did not complain about their problem have lower levels of overall satisfaction and trust the organisation less than any group apart from those with a badly handled problem. It is also possible to quantify the impact on the reputation of the organisation by adding the following questions to the CEM sequence: “Have you spoken to anyone else about XYZ organisation in the last three months?”
"Approximately how many people did you speak to?"
"What kind of things did you say about XYZ?" Interviewer to code as positive, negative or neutral.

FIGURE 15.12 Reputation damage (sample of 2000 customers; had a problem: Yes 38%, No 62%)
Problem resolution | Average number of people spoken to | Net positive or negative word of mouth
Satisfied with problem handling (40%) | 4.3 | +98.4%
Partially satisfied with problem handling (19%) | 5.7 | -23.8%
Dissatisfied with problem handling (26%) | 9.6 | -97.9%
Did not report problem (15%) | 4.6 | -77.3%
No problem (62% of sample) | 3.1 | +86.5%
The information shown in Figure 15.12 can be used to calculate the extent to which problems or poor complaint handling are damaging the reputation of the organisation. Focusing on the customers who did not report the problem, and assuming the organisation has one million customers, the calculation is shown in Figure 15.13:

FIGURE 15.13 Calculating reputation damage
1,000,000 customers x 38% had a problem = 380,000 customers
380,000 customers x 15% did not report problem = 57,000 customers
57,000 customers x 4.6 people spoken to = 262,200 conversations
262,200 conversations x 77.3% net negative = 202,681 negative messages
REPUTATION DAMAGE = 202,681 NEGATIVE MESSAGES
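The Figure 15.13 arithmetic generalises to any customer base and any segment of Figure 15.12. A minimal sketch, using the values from the figures:

```python
def reputation_damage(customer_base, pct_with_problem, pct_not_reported,
                      avg_people_told, net_negative_pct):
    """Estimate negative word-of-mouth messages generated by unreported
    problems, following the Figure 15.13 calculation."""
    with_problem = customer_base * pct_with_problem      # had a problem
    not_reported = with_problem * pct_not_reported       # did not report it
    conversations = not_reported * avg_people_told       # people they told
    return round(conversations * net_negative_pct)       # net negative messages

# Values from Figures 15.12 and 15.13: 38% had a problem, 15% of those
# did not report it, telling 4.6 people each with 77.3% net negative sentiment
messages = reputation_damage(1_000_000, 0.38, 0.15, 4.6, 0.773)
print(messages)
```

Running the same function over the other Figure 15.12 rows (for example the dissatisfied segment at 9.6 people and -97.9%) shows where badly handled complaints do the most reputational harm.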
KEY POINT The reputation damage of negative customer experiences can be quantified.
Conclusions
1. Since CSM has only one purpose, to improve customer satisfaction, it is wasteful and even detrimental to make available the huge volume of data that can be produced from a typical customer survey.
2. Information reported to managers must be brief, so don't waste time reporting facts and figures (however interesting) that do not contribute directly to busy managers' ability to improve customer satisfaction.
3. Some background, methodological detail will be necessary when reporting annual surveys.
4. When reporting continuous tracking, managers must be given enough information to understand trends, and hence judge progress, for individual PFIs as well as for overall satisfaction.
5. It is essential to be authoritative as well as concise when presenting recommendations for PFIs to management.
6. PFI selection should be justified, and the Table of Outcomes is very helpful for this purpose.
7. Precise, tangible PFIs give managers the best chance of improving customer satisfaction but can be incompatible with a survey based on the lens of the customer, since customers typically judge organisations on broad, less actionable constructs.
8. Customer Experience Modelling (CEM) is the way to solve this problem since it allows the satisfaction measure to be based on the lens of the customer, thus providing an accurate reflection of how satisfied or dissatisfied customers feel, whilst using the additional questions to produce highly actionable information.
9. CEM results offer a solution to the difficulty of monitoring progress for organisations that continually track customer satisfaction.
10. Since it is worth investing only in improvements that make a difference, CEM can be used to demonstrate the return to the organisation from achieving specific service improvements. This enables managers to make fact-based decisions about whether to invest in further improvement in the same area or whether to switch emphasis to different actions.
CHAPTER SIXTEEN
Involving employees

A CSM process will not achieve its main goal of improving customer satisfaction unless employees are completely on board. There are some obvious factors around keeping staff informed about what's happening, such as when customers are being surveyed and how they are being surveyed, e.g. in-house or independently, interviews or self-completion. This chapter will focus on some specific initiatives that have been shown to increase employees' feeling of involvement and, consequently, to enhance the organisation's ability to improve customer satisfaction.
At a glance
This chapter will:
a) Explain how a mirror survey will make a very tangible difference to employees' involvement in the survey as well as identify 'understanding gaps'.
b) Suggest ways of ensuring that employees will feel that the survey relates to them and their work.
c) Explain how to effectively feed back the results of the survey to employees.
d) Consider the advantages of involving staff in decisions about how to improve customer satisfaction.
e) Examine ways of using reward and recognition as the ultimate technique for involving employees in the process.
f) Explore the concept of internal customers and its role in delivering external customer satisfaction.
16.1 The Mirror Survey
While carrying out a customer survey it can be very enlightening to survey employees at the same time as customers to identify 'understanding gaps' – areas where staff do not accurately understand what's important to customers or fail to realise that the level of service they provide is not good enough. This exercise is known as a 'mirror survey'. Studies have identified strong correlations between this type of employee communication, the development of a service oriented culture and subsequent improvement in customer satisfaction [1, 2, 3].
16.1.1 Administering the survey
A mirror survey involves administering a slightly modified version of the customer questionnaire to employees. Exactly the same customer requirements are measured, but a mirror survey questionnaire asks employees:
"How important or unimportant do you think these requirements are to customers?"
And:
"How satisfied or dissatisfied do you think customers are with our performance in these areas?"
A mirror survey is normally based on a self-completion questionnaire on paper or in electronic form. If paper-based, it should be given out and collected back in from employees to achieve the highest possible response rate. To preserve confidentiality and ensure honest answers, the questionnaire should be anonymous and employees should be provided with an envelope to seal it in so that their response cannot be read by anyone collecting it. An electronic survey would usually be conducted on the organisation's intranet, but if using this method, extensive communications will be necessary to achieve a high response rate. It is also very useful to include a comments box that employees can use to highlight any barriers that hinder their ability to deliver customer satisfaction and to make suggestions for improvements.
KEY POINT Make a mirror survey anonymous and implement measures to achieve a response rate of at least 50%.
Unlike an employee satisfaction survey, where, even in the largest organisations, a census is normal for reasons of inclusiveness, a mirror survey does not need to incur the cost of a census. For data reliability purposes, a minimum sample of 200 responses is adequate, with at least 50 in any sub-groups such as departments. As with all self-completion surveys, the response rate is also critical. The target for a mirror survey should be at least 50%. Analytical techniques are the same as those already explained in Chapter 10, with the key outputs shown in Figures 16.1 and 16.2.
16.1.2 Understanding customers' requirements

Using the same supermarket results that we have seen earlier, the chart in Figure 16.1 shows the difference between the customers' mean score for the importance of each requirement and the average score given by employees. Employees think that most things are important to customers, scoring almost everything more highly for importance than the customers did. This is very healthy, because if employees think a requirement is at least as important as the customers do, they should be giving it sufficient attention. However, alarm bells should sound when employees under-estimate the importance of a customer requirement. The chart shows that employees significantly under-estimate the importance of 'expertise of staff', scoring it 0.8 lower than customers. Further inspection shows that employees gave broadly similar scores for all three staff requirements, demonstrating that they fail to understand the additional importance that customers place on their expertise compared with their helpfulness and appearance.
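The importance-gap calculation behind a chart like Figure 16.1 is straightforward. The sketch below uses hypothetical scores (the book's actual supermarket data is not reproduced here); the flagging threshold of 0.5 is also an illustrative assumption, not a rule from the text:

```python
# Mirror-survey "understanding gap": employees' mean importance score minus
# customers' mean importance score, per requirement. Negative gaps mean
# employees under-estimate how much a requirement matters to customers.
from statistics import mean

# Hypothetical scores on a 1-10 importance scale.
customer_scores = {
    "Expertise of staff": [9.6, 9.4, 9.8, 9.5],
    "Staff appearance": [7.1, 7.4, 6.9, 7.2],
}
employee_scores = {
    "Expertise of staff": [8.8, 8.6, 8.7, 8.9],
    "Staff appearance": [8.0, 7.9, 8.2, 8.1],
}

for req in customer_scores:
    gap = mean(employee_scores[req]) - mean(customer_scores[req])
    flag = "  <-- employees under-estimate importance" if gap < -0.5 else ""
    print(f"{req}: customers {mean(customer_scores[req]):.2f}, "
          f"employees {mean(employee_scores[req]):.2f}, gap {gap:+.2f}{flag}")
```

With these sample figures, 'Expertise of staff' shows a gap of roughly -0.8, mirroring the situation described in the text.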
FIGURE 16.1 Importance mirror: bar chart comparing customers' and employees' mean importance scores (scale 6.5 to 10) for choice of products, expertise of staff, price level, speed of service, quality of products, layout of store, staff helpfulness and staff appearance.
16.1.3 Understanding customer satisfaction

The second mirror survey chart, shown in Figure 16.2, shows the difference in mean satisfaction scores given by customers and employees. The chart provides some very interesting information, as it shows that where employees perceive any difficulty in meeting customers' requirements they appear to assign responsibility to the company, scoring low on such corporate issues as 'choice of products', 'price', 'quality of products' and 'layout of store'. On all the requirements relating to their own behaviour, they over-estimate customer satisfaction, especially for 'helpfulness of staff', where the average satisfaction score given by employees was 1.2 higher than the one given by customers, and for 'expertise of staff', where employees score 0.9 higher.
FIGURE 16.2 Satisfaction mirror: bar chart comparing customers' and employees' mean satisfaction scores (scale 6.5 to 10) for choice of products, expertise of staff, price level, speed of service, quality of products, layout of store, staff helpfulness and staff appearance.
As well as highlighting understanding gaps on specific attributes, a mirror survey will sometimes uncover a much deeper malaise in the organisation. Whilst employees in some organisations have an incredibly accurate understanding of customers' needs and perceptions, others can display a woeful misunderstanding across the board. For example, when employees give satisfaction scores that are consistently higher than those given by customers, it indicates a degree of unhealthy complacency across the organisation. By contrast, if employees give significantly lower satisfaction scores than customers for all the attributes, it is a sign of poor staff morale.

KEY POINT A mirror survey will often present an enormous opportunity for staff training, providing tangible information to demonstrate key points to employees.

Even if the mirror survey does not identify any understanding gaps or highlight any wider problems within the organisation, taking part in the survey is a very tangible way of involving employees in the CSM process and making them think about the issues of importance to customers. Once the results have been analysed, employees find it very interesting to compare their scores with those given by customers and this added interest helps to facilitate the internal feedback process.
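The organisation-wide diagnostic described above reduces to a simple rule over the per-attribute gaps. The sketch below uses hypothetical gap values (employee satisfaction score minus customer satisfaction score):

```python
# Whole-organisation mirror-survey diagnostic: consistently positive gaps
# suggest complacency; consistently negative gaps suggest poor morale.
gaps = {  # hypothetical attribute-level gaps (employee minus customer score)
    "Choice of products": +0.6,
    "Expertise of staff": +0.9,
    "Price level": +0.4,
    "Staff helpfulness": +1.2,
}

if all(g > 0 for g in gaps.values()):
    print("Employees consistently over-estimate satisfaction: possible complacency")
elif all(g < 0 for g in gaps.values()):
    print("Employees consistently under-estimate satisfaction: possible morale problem")
else:
    print("Mixed picture: investigate attribute-level understanding gaps")
```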
16.2 Relating surveys to work

Most people in all organisations are focused on the day-to-day requirements of their job. Consequently, if CSM is to maximise its effectiveness across the organisation, it must be apparent to everyone how it relates to their daily work. Communications to feed back the survey results, the PFIs and the satisfaction improvement plans are highly beneficial, but they are not ever-present and will not be at the top of most employees' minds on a daily basis. Two techniques that will help employees to relate the surveys to their own jobs are outlined in the next two sub-sections.

16.2.1 Survey results

Where staff have direct contact with customers, it is highly beneficial if the survey results can be drilled down to the lowest possible level. In Chapter 12 we emphasised the effectiveness of internal benchmarking across business units, branches etc. In this context it is highly effective to break down the results to individual members of staff where possible, or at least to small teams as well as larger units such as call centres, stores or branches. Assuming the database links customers to the individual employee or team that handled their call or manages their account, the main issue is sample size. Provided the results are seen as indicative performance indicators, this is a worthwhile exercise with samples as low as 10 respondents, although 25 would be preferable. For organisations involved in continuous tracking of customer satisfaction, it will be possible to roll up the samples of individuals/teams over several months to improve the reliability of the data. An example of the type of output that can be produced is shown in Figure 16.3.
FIGURE 16.3 Satisfaction scores by individual

                                        JC     LH     BD     HW     CP     AJ    Min    Max   Diff
Understanding your requirements        9.05   8.84   8.66   8.70   9.33   8.60   8.60   9.33   0.73
Clear points of contact                9.10   9.08   8.93   8.91   9.44   8.90   8.90   9.44   0.54
Expertise of account manager           9.38   9.56   9.24   9.39   9.44   9.11   9.11   9.56   0.45
The relationship with CM               9.43   8.76   9.34   8.68   9.00   8.70   8.68   9.43   0.75
Professionalism of account manager     9.33   9.08   9.14   9.00   9.25   8.90   8.90   9.33   0.43
Proactivity of account manager         9.38   8.92   9.38   9.18   9.38   8.70   8.70   9.38   0.68
Helpfulness of account manager         8.52   8.71   8.62   8.45   9.13   8.00   8.00   9.13   1.13
Presentation skills of account manager 9.21   9.11   9.05   9.17   9.00   9.50   9.00   9.50   0.50
Scheduling of projects                 9.11   8.71   8.91   8.37   9.38   8.56   8.37   9.38   1.01
Project management                     8.43   9.16   8.48   8.50   9.33   8.78   8.43   9.33   0.90
Feedback on project progress           8.52   9.40   8.93   8.87   8.78   8.67   8.52   9.40   0.88
Speed of response to requests          8.71   8.96   8.71   8.30   8.67   8.78   8.30   8.96   0.66
Quality of designs                     8.52   9.08   9.00   8.39   9.33   8.40   8.39   9.33   0.94
Quality of advice                      8.62   8.79   8.65   8.84   8.89   8.40   8.40   8.89   0.49
Handling problems or complaints        8.93   8.35   8.21   8.30   8.25   8.30   8.21   8.93   0.73
Value for money                        8.86   9.38   8.69   9.00   9.67   8.80   8.69   9.67   0.98
Customer Satisfaction Index           89.9%  90.5%  88.9%  87.5%  91.4%  87.3%  87.3%  91.4%   4.1
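The Min, Max and Difference columns of Figure 16.3 are simple derivations from the individual scores, and sorting by the range surfaces the attributes with the widest performance variation. The sketch below reproduces three rows of scores from the figure:

```python
# Internal benchmarking: derive min, max and range per attribute across
# individuals, sorted so the biggest performance variations come first.
scores = {  # attribute -> scores for JC, LH, BD, HW, CP, AJ (from Figure 16.3)
    "Helpfulness of account manager": [8.52, 8.71, 8.62, 8.45, 9.13, 8.00],
    "Scheduling of projects":         [9.11, 8.71, 8.91, 8.37, 9.38, 8.56],
    "Quality of advice":              [8.62, 8.79, 8.65, 8.84, 8.89, 8.40],
}

for attribute, vals in sorted(scores.items(),
                              key=lambda kv: max(kv[1]) - min(kv[1]),
                              reverse=True):
    spread = max(vals) - min(vals)
    print(f"{attribute}: min {min(vals):.2f}, max {max(vals):.2f}, "
          f"difference {spread:.2f}")
```

For these rows the ranking matches the chapter's observation: 'helpfulness' (difference 1.13) and 'scheduling' (1.01) show far more variation than 'quality of advice' (0.49).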
This kind of internal benchmarking will be most effective if it is undertaken in a positive manner, without any hint of blame or recrimination attached to those with the lowest scores. However, it is clear from Figure 16.3 that even though these scores are all very high, demonstrating an excellent level of customer satisfaction, there is a large variation in staff performance on 'helpfulness of account manager' and 'scheduling of projects'. It would clearly be beneficial to see what HW and AJ could learn from CP to improve their performance on these two customer requirements. Addressing these areas would make a big contribution to HW and AJ bringing their overall customer satisfaction indices closer to CP's index.

KEY POINT Internal benchmarking is very powerful and will be more effective if the results are drilled down to the lowest possible level.

16.2.2 Customer comments

There will also be customer comments, especially if telephone interviews are used and low satisfaction scores are probed. Employees will usually be even more interested in the comments made by their own customers than they are in the scores. Comments are particularly useful for demonstrating to employees why they have recorded low scores for some requirements. In business-to-business markets the comments will be even more meaningful if attributed to specific customers, but this must be done only with respondents' permission. There is an argument that if the comments focus solely on reasons for low satisfaction scores it may be de-motivating for the staff concerned. This can be overcome by also probing top box scores, so that employees understand what delights customers as well as what upsets them. Of course, it is important that the amount of probing does not unreasonably extend the interview length, so for organisations with very high customer satisfaction a single open question about the account manager, consultant, customer service advisor etc. would be more appropriate. An example of a suitable question would be: "Taking everything into account, how would you describe the service provided by your account manager and what could he/she do better?"
16.3 Employee communications

Feeding back the results to all employees is an essential element in the long-term health of a CSM programme. Little action will be taken to improve customer satisfaction if employees don't know enough about the results or their implications. The extent of the feedback provided to employees will also send messages about how important the customer survey is to the organisation. Many studies have shown the importance of suitable employee communications in building a service-focused climate4,5,6.

Results can be communicated internally through a variety of media such as staff magazines or newsletters, notice boards, intranet and e-mail. A more effective method is to present the results personally, preferably to all employees but at least to all those who have an important role in delivering customer satisfaction. It is true that for larger organisations, face-to-face feedback such as workshops or road-shows for large numbers of employees will be a costly exercise. However, the financial benefits of improving customer satisfaction and loyalty will almost always justify the costs involved.

KEY POINT The extent of CSM feedback will send a key message to employees about the importance of customers.

A suggested agenda for an internal presentation is shown in Figure 16.4. The session should start by demonstrating that the survey was professionally conducted and therefore provides a reliable measure; in short, that the right questions were asked to the right people. It is therefore important to explain the exploratory research that led to the design of a suitable questionnaire based on the lens of the customer, as well as explaining the robustness of the sample.

KEY POINT Extensive internal feedback of the results will justify its costs by increasing the customer focus of the organisation.

The results should then be presented, especially the importance scores, the satisfaction scores, the gap analysis and the satisfaction index. As suggested in Chapter 12, it is also helpful to put the satisfaction index and the satisfaction scores for the individual requirements into context and demonstrate to employees how their performance compares with that achieved by other organisations.

FIGURE 16.4 Internal feedback of CSM results
1. Questionnaire: exploratory research
2. Sampling: representative of customer base; random, without bias
3. Survey results: importance scores; satisfaction scores; gap analysis; satisfaction index; benchmarking
4. Ideas for action: short term; long term
Finally, the workshop should look to the future, initially by reiterating the importance of the PFIs and then by taking the opportunity to invite ideas about how the PFIs might be addressed. Time permitting, it is very useful to break employees into small groups to discuss the issues. Ask them to brainstorm ways in which the PFIs could be addressed. Having generated a list of ideas, each group should sort them into two categories: those that could be implemented easily, quickly and at low cost (the quick wins) and those that are longer term on grounds of cost or difficulty. Employees should then select the ones they consider to be the best three short-term and best three long-term ideas, and be prepared to present their suggestions to the rest of the workshop. This will result in a large number of ideas for action. The selection process can be taken a step further by asking everybody to score the full list of ideas in order to identify the best overall short-term and long-term ideas. Apart from the fact that employees, who are close to the action, will often think of good ideas which would not have occurred to management, the great advantage of this approach is that employees are far more likely to enthusiastically embrace a service improvement programme that they have helped to shape rather than one which has simply been handed down by management.
16.4 Reward and recognition

It has been well documented that employees are more motivated if their efforts are recognised and rewarded7,8. Recognition is often achieved in simple ways, such as thanking an employee who has worked hard to deliver good service to customers. More public forms of recognition, such as a small monthly prize for the employee who has received the best customer commendation or colleague nomination for great service, can also work well in some organisations. Public recognition of teams, departments and the whole company will also be beneficial to celebrate success in increasing customer satisfaction, partly to recognise employees' hard work and also to demonstrate that it is important to management.

The best way to emphasise the importance of customer satisfaction and to motivate employees to improve is to introduce an element of customer satisfaction-related pay. However, as with all aspects of employees' remuneration, any kind of customer satisfaction-related bonus will come under heavy scrutiny, so its basis needs to be carefully considered before introduction.

16.4.1 Basis of the scheme

Two methods are commonly used for determining customer satisfaction-related pay. The first is a company-wide scheme that pays a bonus to all employees based on one measure (typically the company's customer satisfaction index). The bonus could be a flat-rate sum for all employees or a fixed percentage of salary. Either way, it is perceived as a simple and transparent system that is the same for everybody. Its disadvantage is that some employees may not perceive all colleagues as making an equal impact on the company's ability to achieve the customer satisfaction target9. However, this can be addressed by using action mapping (see Figure 12.9), cross-functional teams and by focusing on the importance of satisfying internal customers as an essential step towards satisfying external customers (see Section 16.5).

The alternative model would be based on team- or departmental-specific schemes. It can even be appropriate to have individual customer satisfaction-related pay for some roles, such as account managers. Bespoke schemes can be very flexible, but bonus targets would typically be based on customer satisfaction with requirements that can be affected by the department concerned. Departments with little or no customer contact can have a bonus based on the overall customer satisfaction index or, preferably, on their ability to satisfy their own internal customers. The big advantage of this second model is that customer contact staff will see the bonus as much more closely linked to their day-to-day work. The disadvantage is that some employees who think other departments have more favourable schemes may see the system as unfair.

KEY POINT Customer satisfaction-related pay will do more than anything to demonstrate the importance of customer satisfaction to the organisation. The existence of a scheme is more important than its details.

There is no simple answer to the appropriateness of these two models to a specific company. Often it will depend on the organisational structure and culture of the individual company. However, a good rule of thumb is that a company-wide scheme will usually be most suitable when customer satisfaction-related pay is first introduced, because it is clear and simple, and the fact that it is the same for everyone will be an advantage at the outset. As time passes, employees will become more familiar with the efforts required to improve customer satisfaction and the varying roles played by different individuals and teams. As customer satisfaction increases it will also become more difficult to improve it further.
For both these reasons, moving to a team-based or even an individual-based scheme, where appropriate, will become more effective as the scheme matures.

16.4.2 Targets

The targets are as important as the basis of the scheme itself. As well as being extremely visible, targets must be sufficiently ambitious to benefit the company whilst still being achievable9. It will be very de-motivating if targets are missed, especially in the first year, and this may result in the scheme ceasing to motivate employees. Unfortunately, this often happens, as senior management has a tendency to set unrealistically high targets for improving customer satisfaction. As mentioned earlier in this book, increases in customer satisfaction are usually quite small and achieved only with much effort. Consequently, an average annual improvement of 2% in the customer satisfaction index, if sustained over several years, would be a very good achievement and an ambitious target for a customer satisfaction-related pay scheme.
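Even a modest-sounding 2% annual improvement compounds meaningfully over time. The sketch below assumes a hypothetical starting index of 78% and interprets the 2% as a relative year-on-year gain (the text does not specify relative versus percentage-point improvement):

```python
# Sanity check on target-setting: a sustained 2% relative annual improvement
# in the satisfaction index, from an assumed starting point of 78%.
index = 78.0  # hypothetical starting customer satisfaction index (%)
for year in range(1, 6):
    index *= 1.02  # 2% relative improvement each year
    print(f"Year {year}: target index {index:.1f}%")
```

Each single year's target stays small enough to be credible to employees, yet after five years the cumulative gain is over eight percentage points.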
KEY POINT Don't set unrealistically high targets for customer satisfaction improvement.

16.4.3 Frequency of measures

The frequency of the survey will dictate how often the organisation can pay customer satisfaction-related bonuses. The advantages of annual payments are that they can be linked with employees' performance appraisals and/or pay reviews and with annual business planning cycles for developing, justifying and implementing customer satisfaction improvement plans. The disadvantage is that an annual bonus may not motivate employees on a daily basis. This can be addressed through a planned programme of communications to keep the spotlight on customer satisfaction.

For a typical B2B company with a relatively small customer base, annual surveys and bonuses will usually be most appropriate. A smaller-scale interim tracker survey slightly more than half way through the year will be useful to indicate whether the company is on course to meet the target and achieve the bonus. If it isn't improving, and provided at least four months remain before the annual survey, there should still be time to make renewed efforts to address the PFIs and make a positive impact on customer satisfaction.

In a very service-intensive environment, such as a call centre, more frequent measures and bonuses will usually be more effective. The customer base and throughput of calls need to be sufficiently large, but will be in most call centres and B2C businesses. The American bank card issuer MBNA measures customer satisfaction and pays bonuses on a daily basis10. Its customer satisfaction index is based on telephone interviews and is displayed every day for the previous day's activity, providing immediate feedback to employees. Every day that the index is above target, the company contributes to a fund, which pays out the bonus on a quarterly basis. Since customer satisfaction is measured daily, every day when employees arrive at work they stand a fresh chance of earning a bonus, however good or bad their customer satisfaction scores were previously.

16.4.4 Credibility of the methodology

If customer satisfaction-related pay is to motivate, the measure on which the payment is based must be credible. This will depend on its statistical reliability and on the fact that it really is an accurate measure of how satisfied or dissatisfied customers feel. As we said in Chapter 6, a minimum sample of 200 responses is necessary for good reliability, whether the bonus is triggered on a daily, monthly or annual basis. Since interim tracker surveys are 'indicative' and will not be used as a basis for bonus payments, samples can be smaller, even as low as 50 responses.

However large the sample, it will not provide a suitable basis for customer satisfaction-related pay unless the survey asks the right questions. As we know from Chapters 4 and 5, this is based on thorough exploratory research. Explaining how the questions were determined, and the fact that the index consequently provides a true reflection of customer satisfaction, is an essential step in convincing employees of the credibility of a customer satisfaction-related pay scheme.
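The contrast between a 200-response bonus survey and a 50-response indicative tracker can be made concrete with a standard confidence-interval calculation. This is a generic statistical sketch, not the book's own method, and the assumed standard deviation of 1.5 on a 10-point scale is hypothetical:

```python
# Rough reliability comparison: 95% confidence half-width for a mean
# satisfaction score at the two sample sizes discussed in the text.
import math

sd = 1.5  # assumed standard deviation of scores on a 10-point scale
for n, label in [(200, "bonus-triggering survey"), (50, "interim tracker")]:
    half_width = 1.96 * sd / math.sqrt(n)  # normal-approximation 95% CI
    print(f"n={n} ({label}): mean accurate to within about "
          f"\u00b1{half_width:.2f} points")
```

Halving the sample size twice roughly doubles the margin of error, which is why the smaller tracker is treated as indicative only.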
16.5 Internal customers

In most organisations, all employees contribute to customer satisfaction even if they don't personally have direct contact with customers, because they are part of the capability and culture of the organisation and they provide essential services to other departments that do interface directly with customers. Consequently, in organisations that are truly customer-focused, all employees will see themselves as delivering services to customers7,11,12. The only difference is that some will be focusing on external customers, others on internal customers.

KEY POINT Everyone in the company has customers, external or internal. Satisfying both is essential.

16.5.1 The importance of internal customers

Unless customer-facing employees receive good service from support functions within the organisation, they will be seriously handicapped in their ability to deliver good service to external customers. If poor internal service continues over time, there is little hope of improving customer satisfaction, since customer-facing staff will become de-motivated and will adopt the poor service culture of the organisation7. In a 1998 study by Schneider et al covering 132 bank branches, the score for 'interdepartmental service' was the strongest predictor of external customers' perceptions of service quality13. This has been widely supported by other research14,15,16, and in The Service-Profit Chain, Heskett et al describe how companies like Southwest Airlines, which are renowned for external customer satisfaction, adopt measures such as staff training and team building exercises to encourage employees to focus on internal as well as external customers10.

KEY POINT Employee satisfaction with internal customer service is often the main determinant of the organisation's ability to satisfy external customers.
16.5.2 Measuring internal customer satisfaction

In view of its importance, growing numbers of customer-focused organisations are beginning to measure and monitor the satisfaction of internal customers. This can be done at departmental level, e.g. the IT department measuring the satisfaction of employees using IT services. More consistent and more useful, however, is to conduct an internal customer satisfaction survey across the whole organisation. Some companies do this as frequently as quarterly, especially where many internal services are outsourced, although annual or bi-annual internal customer satisfaction surveys are more common.

Since customer satisfaction measurement is about understanding people's satisfaction judgements, it makes no difference to the methodology whether customers are internal or external. Whilst some researchers have attempted to compile a list of standard dimensions for measures of internal service17, a measure that accurately reflects how satisfied or dissatisfied internal customers feel will be produced only if the questions are based on the criteria that the internal customers themselves use to judge the services. As covered in Chapters 4 and 5, exploratory research should be conducted and the questionnaire based on internal customers' most important requirements for each service.

Surveys of internal customers can be conducted on paper, on the intranet or by telephone interview. The points made in Chapter 7 about the advantages of telephone interviews, especially response rates and collecting detailed comments, apply equally to internal customer satisfaction surveys, although interviews will obviously be more costly than self-completion.

KEY POINT If internal customer satisfaction is accurately and frequently measured, staff will be much more focused on the quality of service they provide to other employees.

The only significant differences between internal and external customer satisfaction surveys will be in questionnaire design, particularly where quite large numbers of internal services are involved. In a large organisation there may be as many as 12 to 15 internal services that need to be monitored, with a section on the questionnaire for each service. If the principles outlined in Chapter 9 were strictly followed, this would result in an extremely long questionnaire.
It is therefore normal practice for internal customer satisfaction surveys to base the questionnaire on a much smaller number of customer requirements, typically the four to six most important requirements for each service covered. It is also advisable to conduct quantitative exploratory research, partly to ensure that the small number of requirements included for each service really are the most important ones to internal customers, and partly to provide importance scores without having to lengthen the questionnaire for the main survey by asking about importance.

With, say, 15 services and an average of five requirements scored for each one, the questionnaire will still be long: 75 questions scored for satisfaction, plus any classification questions. However, in most organisations few people use every single service on a regular basis, so if employees are asked to score only services that they have used within the last month, respondents will, on average, score only around half of the sections, resulting in a reasonable completion time. For organisations where the questionnaire may still be too long, perhaps because they have even more services and/or employees use most of them, it is possible to split the sample, each half scoring a different set of services. Provided the sample is large enough and randomly selected, this will present no problems for comparability. However, when considering sample size it is necessary to bear in mind that the number of interviews conducted or questionnaires sent out must be sufficient to produce at least 200 responses for the less frequently used services.

Using the importance scores generated by the quantitative exploratory research (which must be updated every three years), it will be possible to calculate a weighted customer satisfaction index and satisfaction gaps for each department/service. Due to the small number of customer requirements for each service, it is advisable to highlight just one PFI for each service. As well as enhancing the service culture of the organisation, measuring internal customer satisfaction is very useful for companies wanting to introduce customer satisfaction-related pay at departmental level. Where appropriate, employees' bonuses will be based on the index for their department generated by the internal customer satisfaction survey.
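One common way to compute an importance-weighted satisfaction index and identify a single PFI for a service is sketched below. The service's requirement names and all scores are hypothetical, and the weighting scheme (importance scores normalised to weights) is a standard approach rather than necessarily the exact formula used in Chapter 10:

```python
# Weighted satisfaction index for one internal service department (sketch).
# Importance scores come from the quantitative exploratory research;
# satisfaction scores come from the main internal survey. All 1-10 scale.
importance =   {"Speed of response": 9.2, "Accuracy": 8.8, "Helpfulness": 8.0}
satisfaction = {"Speed of response": 7.9, "Accuracy": 8.6, "Helpfulness": 8.4}

# Normalise importance scores into weights summing to 1.
total_importance = sum(importance.values())
weights = {req: imp / total_importance for req, imp in importance.items()}

# Weighted average satisfaction, expressed as a percentage of the 10-point scale.
index = sum(weights[req] * satisfaction[req] for req in satisfaction) / 10 * 100
print(f"Weighted satisfaction index: {index:.1f}%")

# The single PFI is the requirement with the largest weighted satisfaction gap.
gaps = {req: weights[req] * (importance[req] - satisfaction[req])
        for req in satisfaction}
pfi = max(gaps, key=gaps.get)
print(f"Priority for improvement: {pfi}")
```

In this example 'Speed of response' emerges as the PFI: it is the most important requirement and has the largest shortfall between importance and satisfaction.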
Conclusions

1. A mirror survey involves employees in the CSM process and identifies understanding gaps, which can be very serious if staff under-estimate the importance of a customer requirement or are complacent about the level of customer satisfaction they are delivering.
2. It is very important to achieve a response rate of at least 50% for a mirror survey, so employees' anonymity should be protected and paper questionnaires should be collected in sealed envelopes.
3. Mirror surveys often identify staff training needs and provide great information for developing the content of training courses. Comments as well as understanding gaps are extremely useful for this purpose.
4. Employees will be more motivated to improve customer satisfaction if survey results and customers' comments can be attributed to them personally, or at least to small teams.
5. Extensive feedback of CSM results to all employees is an essential pre-requisite to improving customer satisfaction.
6. Customer satisfaction-related pay will be very effective in motivating staff to make efforts to improve customer satisfaction. A company-wide scheme is usually the best starting point, but individual, team or departmental-based schemes will work best in the long run.
7. Targets must be achievable. For most organisations, improving the customer satisfaction index by more than 2% year-on-year is not realistic.
8. To keep the spotlight on customer satisfaction, monthly or quarterly measures are advisable in B2C markets, but for most B2B companies annual surveys are more practical.
9. Fundamental to the success of customer satisfaction-related pay will be a credible CSM process that asks the right questions to the right customers, with samples of at least 200.
10. Providing good service to internal customers is essential to achieving high levels of external customer satisfaction, so many organisations have now adopted a formal process for monitoring internal customer satisfaction.
References

1. Schneider, Ashworth, Higgs and Carr (1996) "Design, validity and use of strategically focused employee attitude surveys", Personnel Psychology 49
2. Schneider and Bowen (1985) "Employee and customer perceptions of service in banks", Journal of Applied Psychology 70
3. Schmit and Allscheid (1995) "Employee attitudes and customer satisfaction: Making theoretical and empirical connections", Personnel Psychology 48
4. Trice and Beyer (1993) "The cultures of work organisations", Prentice-Hall, Englewood Cliffs, New Jersey
5. Schneider, Wheeler and Cox (1992) "A passion for service: Using content analysis to explicate service climate themes", Journal of Applied Psychology 77(5)
6. Schneider and Bowen (1995) "Winning the Service Game", Harvard Business School Press, Boston
7. Schneider and White (2004) "Service Quality: Research Perspectives", Sage Publications, Thousand Oaks, California
8. Rynes and Gerhart (2000) "Compensation in Organizations: Current Research and Practice", Jossey-Bass, San Francisco
9. Robertson, Raymond (2007) "The Together Company", Cogent Publishing, Huddersfield
10. Heskett, Sasser and Schlesinger (1997) "The Service-Profit Chain", Free Press, New York
11. Barwise and Meehan (2004) "Simply Better: Winning and keeping customers by delivering what matters most", Harvard Business School Press, Boston
12. Heskett, Sasser and Schlesinger (2003) "The Value-Profit Chain", Free Press, New York
13. Schneider, White and Paul (1998) "Linking service climate and customer perceptions of service quality: Test of a causal model", Journal of Applied Psychology 83(2)
14. Gronroos, C (1990) "Relationship approach to marketing in service contexts: The marketing and organizational behaviour interface", Journal of Business Research 20
15. Heskett, Sasser and Hart (1990) "Breakthrough Service", Free Press, New York
16. Hallowell and Schlesinger (2000) "The service-profit chain: Intellectual roots, current realities and future prospects" in Swartz and Iacobucci (eds) "Handbook of Services Marketing and Management", Sage, Thousand Oaks, California
17. Reynoso and Moores (1995) "Towards the measurement of internal service quality", International Journal of Service Industry Management 6
281
CHAPTER SEVENTEEN

Involving customers

In Chapter 7 we emphasised the importance of involving customers from the outset and explained how to introduce the survey to customers to achieve the best possible response rate. We also emphasised that a key part of the introductory letter is the promise of feedback after the survey, and we will cover this important part of the CSM process in this chapter.
At a glance

In this chapter we will:

a) Explore the debate about whether customers’ perceptions provide a reliable measure of an organisation’s performance.
b) Consider the difference between ‘performance gaps’ and ‘perception gaps’.
c) Explain how to provide feedback to customers on the results of a customer satisfaction survey.
17.1 Perception or reality?

There has been much debate over the years about the extent to which customer satisfaction measures provide an accurate reflection of the organisation’s performance, or whether they are subjective judgements on the part of customers that may not reflect reality and should therefore be treated with caution, if not ignored as totally unreliable. The idea of measuring customer satisfaction originally grew out of the quality movement in the USA in the 1980s, when consistency of quality became seen as a key reason why Japanese manufacturers seemed to be more successful than their American competitors1. Quality measures tended to be factual and objective, typically concerned with the extent to which products conformed to specification. This became known as the ‘technical approach’2 to quality management. Over time, however, most quality academics and practitioners came to favour the ‘user-based approach’ rather than the ‘technical approach’, on the grounds that the only measure of quality that mattered was the level of quality perceived by the customer2,3. Perceptions are basically mental maps made by people to give them a meaningful picture of the world on which they can base their decisions4. However, as Kotler points out, due to the way people see and remember things, two people may have differing perceptions of the same event or quality level5.
A very early subscriber to the user-based approach was Tom Peters, and it was this principle that prompted him to coin his famous phrase6 “customer perception is the only reality”. He emphasised that whilst customers’ judgements may be “idiosyncratic, human, emotional, end-of-the-day, irrational, erratic”, they are the attitudes on which customers everywhere base their future behaviours. As Peters says, the possibility that customers’ judgements are unfair is scant consolation once they have taken their business elsewhere. The notion that due to time constraints most purchase decisions, in business as well as consumer markets, are made on less than perfect knowledge is widely supported in CSM literature. Customers rely on their memory to provide a level of information that makes them feel comfortable when making most purchase decisions7,8,9. As a consequence, it is customers’ perceptions that organisations need to measure and customers’ perceptions that they must attempt to manage.

KEY POINT
Customers’ perceptions may be “idiosyncratic and emotional” but companies will dismiss them at their peril since they drive customers’ future behaviours. The fact that the organisation’s internal data provides a more accurate reflection of its performance is scant consolation once the customers have defected.

This sequence of events was confirmed by AT&T as long ago as the late 1980s. They found that real changes in product quality, as defined by internal quality assurance data, drove subsequent changes in customers’ perceptions of quality with an average three-month time lag10. AT&T also demonstrated that changes in customers’ perceptions of quality were followed only two months later by changes in market share.
17.2 Performance and perception gaps

In this book we have placed considerable emphasis on the satisfaction gaps that exist when an organisation has not met its customers’ requirements. However, since satisfaction judgements are based on customers’ perception or recollection of events, we can distinguish two types of satisfaction gap – performance gaps and perception gaps. Most satisfaction gaps are performance gaps. For example, customers think the service in the restaurant is slow, and, due to inadequate staffing, it often is very slow. This is a performance gap and the only way to successfully close it is to invest in more staff to produce a real improvement in the speed of service. Sometimes, however, satisfaction gaps will be perception gaps. This typically arises when an organisation has improved its performance but customers have not yet modified their attitudes. If a restaurant has a reputation for mediocre food, it may take several visits before customers revise their perception of food quality after a more skilled chef has been recruited. If it is possible that customers’ perceptions may
not be accurate or up to date, companies cannot assume that delivering high quality, excellent service and great value will guarantee customer satisfaction and loyalty. It will do so only if that’s how customers perceive it. Suppliers therefore need a two-pronged approach to increasing customer satisfaction. Of course, they must deliver high quality, excellent service and great value, but they must also use communications to make sure that’s how the customers see it too. As far as the CSM process is concerned, the main opportunity for influencing customers’ perceptions comes from providing feedback on the survey results.

KEY POINT
Delivering high quality, excellent service and great value will result in customer satisfaction and loyalty only if that’s how customers perceive it.
17.3 Feedback to customers

Informing customers about the CSM results increases interest in the survey but also provides an excellent opportunity to improve customer satisfaction by demonstrating the organisation’s commitment to its customers. When providing feedback on a customer satisfaction survey, companies need to consider three things:

Which customers should receive feedback?
What information will be communicated?
How will it be communicated?

17.3.1 Which customers?

At the very least, feedback should be provided to all customers who took part in the survey. If the survey was an anonymous self-completion survey, the identity of the respondents will not be known, so targeted feedback to customers who actually took part will not be possible. If an agency has carried out the survey, respondent confidentiality can be assured without anonymity, so the agency would know which customers had responded and should receive feedback. A second possibility is simply to provide feedback to all customers in the sample, whether or not they responded, but if feedback is to be provided to non-respondents in the sample, why not to customers generally? For organisations with a very large customer base the obvious answer is cost. As with internal feedback, the pertinent question is whether the cost can be justified by the benefit. Many organisations fail to realise the potential value of feeding back the CSM results to the entire customer base. Most proactive communications that companies send to customers are selling messages, and are recognised as such by customers, who often have a cynical view about advertising, mailshots, promotions and other forms of marketing. Companies, however, invest huge budgets in marketing communications, often to little effect. Feeding back the results of a customer satisfaction survey provides a rare opportunity to send a different kind of communication to customers. Since it is not a selling message, it is
more likely to engage customers’ attention and interest and, consequently, to drive a positive change in their attitudes about the organisation.

KEY POINT
Providing CSM feedback to customers is one of the most under-exploited opportunities for improving customer satisfaction.

17.3.2 What information?

The starting point is to produce a short feedback report containing the information that will be provided to customers. This should cover four areas, which could be presented to customers in the following way:

1. Why do we survey customers?
2. How is the survey done?
3. What did you tell us?
4. What action are we taking?

Why do we survey customers?
This short introductory paragraph provides a great opportunity to improve customers’ perception of the organisation’s customer focus by emphasising that it is listening to customers and values their opinions highly. Assuming the survey is conducted on a regular basis, this should be explained and used to demonstrate the fact that continuous customer feedback forms a key input to management decisions. The date of the survey (or period to which the results apply) should also be stated.

How is the survey done?
If the survey is conducted by independent experts this will enhance its credibility, so this is the first point to make in this section. The second is the fact that the questions were based on what’s important to customers, as specified by the customers themselves during a thorough consultation process. Brief details of the exploratory research (e.g. focus groups or depth interviews) should be provided to underpin this second important element in the survey’s credibility. The third factor that builds customers’ trust in the survey results is the representativeness of the sample; information that can be effectively illustrated by pie charts. Finally, the method of survey – e.g. telephone interviews, postal questionnaire or web survey – should be briefly stated.

What did you tell us?
Here the results of the survey should be reported factually and honestly, usually in the form of a clear, simple bar chart of the satisfaction scores. Whilst a truncated x axis scale may be used for internal feedback to emphasise differences, (as in Figures 10.4 and 12.1 for example), the full 1 to 10 scale should be used for the feedback report. This is purely for PR purposes since the wider scale results in longer bars that make the scores look better! As we have said before, it is better internally to make the scores
look worse since this is more likely to stimulate action to improve satisfaction. Another divergence from internal presentation is that the requirements would be listed in questionnaire order rather than importance order, since this will appear more logical to customers. In fact, it isn’t necessary to feed back any information on importance or impact. Customers will be interested in how satisfied other customers are, whether that’s improving and what’s being done about it. Trend information, especially for the satisfaction index, will therefore be very useful for demonstrating improvement. In this respect it is very helpful to communicate the message that the organisation listens to its customers and acts on their feedback. Any specific actions that have been implemented from an earlier survey which help to explain higher satisfaction scores provide powerful evidence of the organisation’s customer focus, so should be highlighted. It is in this way that the post-survey feedback begins to shift customers’ perceptions about the organisation.

What action are we taking?
The “you told us so we are taking action” theme should be continued in this final, and most essential, part of the feedback report. It is helpful to provide as much detail as possible about the actions to be taken and the timescales for implementation. Informing customers about changes that will occur, or, for fast-moving organisations, have already happened, will help to ensure that they notice the improvements, modify their perceptions and become more satisfied and loyal.

KEY POINT
To enhance the organisation’s reputation for customer focus, emphasise the message that it listens to customers and acts on what they say.

17.3.3 How to communicate it?

How the information is provided depends mainly on the size of the customer base. Personal presentation is by far the most effective method and is quite feasible for companies with a small number of key accounts.
For a medium-sized customer base, a copy of the feedback report should be mailed with a personalised letter. If very large numbers of customers are involved, mass-market communications will need to be used. These might include a company newsletter or a brief survey report mailed with another customer communication such as a bill. Retailers and other organisations whose customers visit them can cost-effectively utilise point-of-sale material. This may include posters, leaflets, display cards or stands. Moreover, customer contact staff can be briefed to enhance the feedback through their verbal communications with customers. Point-of-sale displays might, for example, encourage customers to ask staff for further details. Even TV advertising has been used to communicate to very large customer bases the survey results and the fact that action is being taken. A very low-cost method of providing customer feedback is to publish it in the form of a web page. This could have links to other parts of the web site and could be
signposted elsewhere using any of the media mentioned above. It is low cost and easy to update. Examples of web and paper feedback reports can be found at www.leadershipfactor.com.
17.4 Other communications

Although feedback of the CSM results is extremely helpful, it should not be seen as the sum total of the organisation’s efforts to use communications to modify customers’ perceptions. Although people can form negative attitudes quickly following a bad customer experience, they tend to change them slowly, especially when it comes to feeling more satisfied. Consequently, organisations must do everything possible to speed up customers’ attitude change and improve satisfaction by providing regular information about improvements that have occurred. All the channels of communication mentioned in the previous section should be considered, and messages should emphasise the theme of listening to customers and acting on their views. It will also be useful to reinforce this principle by reminding customers about any existing information on CSM results, such as a web feedback page, and any opportunities for them to express their views, such as an email address or toll-free number.
Conclusions

1. Satisfaction surveys provide a measure of customers’ perceptions about their customer experience.
2. Whilst perceptions may not always be an accurate reflection of the organisation’s performance, they drive customers’ future behaviour, so they are the most useful measures to monitor.
3. As well as taking action to improve their performance, organisations should recognise that communications also provide excellent opportunities for improving customer satisfaction.
4. Providing information on the satisfaction survey results and the actions to be taken should be seen as an essential part of the CSM process.
References

1. Schneider and White (2004) “Service Quality: Research Perspectives”, Sage Publications, Thousand Oaks, California
2. Helsdingen and de Vries (1999) “Services marketing and management: An international perspective”, John Wiley and Sons, Chichester
3. Oliver, Richard L (1997) “Satisfaction: A behavioural perspective on the consumer”, McGraw-Hill, New York
4. Berelson and Steiner (1964) “Human Behaviour: An Inventory of Scientific Findings”, Harcourt Brace Jovanovich, New York
5. Kotler, Philip (1986) “Marketing Management: Analysis, Planning and Control”, Prentice-Hall International, Englewood Cliffs, New Jersey
6. Peters and Austin (1986) “A Passion for Excellence”, Fontana, London
7. Howard and Sheth (1969) “The Theory of Buyer Behaviour”, John Wiley and Sons, New York
8. Webster and Wind (1972) “Organizational Buying Behavior”, Prentice-Hall, Englewood Cliffs, New Jersey
9. Bagozzi, Gurhan-Canli and Priester (2002) “The Social Psychology of Consumer Behaviour”, Open University Press, Buckingham
10. Gale, Bradley T (1994) “Managing Customer Value”, Free Press, New York
CHAPTER EIGHTEEN
Conclusions

Customer satisfaction refers to the feelings customers have formed about a customer experience. These feelings are attitudes and they drive behaviours such as Harvard’s 3Rs (retention, related sales and referrals) that are typically called loyalty. The purpose of CSM surveys is not to produce information but to improve customer satisfaction and loyalty. Whilst we have tried in this book to familiarise readers with all the current thinking behind customer satisfaction surveys and the references that underpin it, this level of knowledge goes beyond what most practitioners need in the real world. To use CSM successfully to improve customer satisfaction and loyalty, organisations need to be very clever with the methodology, only quite clever with the analysis and very simple with the outcomes and recommendations. They must also never forget the power of communications. In this concluding chapter, we will review these critical essentials of CSM and pose a final challenge to readers.
At a glance

In this chapter we will:

a) Remind readers of the essential elements of a CSM methodology that will accurately reflect how satisfied or dissatisfied customers feel.
b) Review the arguments for relating the level of complexity of CSM analysis to the organisation’s progress on its customer satisfaction journey.
c) Set some challenges for organisations in the public and private sectors.
18.1 The essentials of an accurate CSM methodology

If a CSM process does not provide a totally accurate reflection of how satisfied or dissatisfied customers feel, it is pointless and could even be detrimental to the organisation’s future success. To ensure they don’t fall at the first hurdle, organisations should adhere to all 10 of the CSM essentials listed below:

1. To produce an accurate measure of how satisfied or dissatisfied customers feel, surveys must be based on the same criteria the customers use to make that judgement. This means conducting exploratory research to understand the lens of the customer and basing the questionnaire on customers’ most important requirements.
2. Since satisfaction is about the extent to which customers’ requirements have been met, importance as well as satisfaction must be measured.
3. Stated importance is the only measure of the relative importance of customers’ requirements. So-called statistically derived measures of importance actually measure impact.
4. The most accurate measure of relative impact is provided by correlation, not multiple regression.
5. Robust samples are essential for reliability. This means at least 200 responses and a response rate of at least 30%.
6. To ensure there is no bias in the results, customers should be randomly sampled using a method such as systematic random sampling that produces samples that are representative as well as random.
7. The only rating scale that is suitable for CSM is a 10-point numerical scale. This is partly for its analytical benefits but mainly due to its far superior properties for improving customer satisfaction.
8. Questioning must be neutral and balanced, giving customers as much chance to be dissatisfied as satisfied.
9. Only an index provides a reliable overall satisfaction measure for tracking purposes. The customer satisfaction index is based on customers’ most important requirements weighted for relative importance.
10. A loyalty measure should also be based on a composite index from several loyalty questions, but would not normally be weighted.

These 10 essential steps to a reliable measure of customer satisfaction apply to all organisations, but when it comes to analysis the picture is more complicated, as explained in the next section.
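The importance-weighted index in essential 9, and the satisfaction gaps it rests on, can be sketched in a few lines of Python. The figures below are invented for illustration, and the weighting formula (importance-weighted average satisfaction, expressed as a percentage of the 10-point scale) is our assumption of the general approach; the book's exact index calculation is described in Chapter 11.

```python
# Illustrative sketch: an importance-weighted customer satisfaction index
# and simple satisfaction gaps. All figures are invented.

requirements = {
    # requirement: (mean importance, mean satisfaction), both on 10-point scales
    "Reliability of delivery": (9.4, 7.6),
    "Product quality":         (9.1, 8.3),
    "Helpfulness of staff":    (8.2, 8.8),
}

total_importance = sum(imp for imp, _ in requirements.values())

# Weight each satisfaction score by its share of total importance,
# then express the weighted average as a percentage of the 10-point scale.
index = sum(sat * imp / total_importance
            for imp, sat in requirements.values()) / 10 * 100

# Satisfaction gap: how far satisfaction falls short of stated importance.
gaps = {name: imp - sat for name, (imp, sat) in requirements.items()}
biggest_gap = max(gaps, key=gaps.get)

print(f"Satisfaction index: {index:.1f}%")
print(f"Largest satisfaction gap: {biggest_gap} ({gaps[biggest_gap]:.1f})")
```

Note how the gap analysis surfaces the priority for improvement directly: the requirement with the largest positive gap (here, reliability of delivery) is where satisfaction falls furthest short of what customers say matters most.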
18.2 Complexity of analysis

The level of complexity that organisations need to use when analysing CSM data depends on two factors: how satisfied their customers are and how long they’ve been measuring customer satisfaction. Of these, the former is much more important.

18.2.1 Level of customer satisfaction

The less satisfied customers are, the simpler analysis can and should be. Organisations with a customer satisfaction index below 75% (calculated according to the method specified in Chapter 11) should keep customer satisfaction surveys very basic and action-focused. The only reason for such poor customer satisfaction is that one or more very important, and often basic, customer requirements are not being met. The organisation is simply not doing best what matters most to customers. The causes of this problem will be obvious from the simple gap analysis explained in Chapter 12. However long the organisation has been conducting customer satisfaction surveys, no amount of sophisticated statistical analysis will change this basic fact. All available time, effort and resources should be directed to addressing the largest satisfaction gaps (three at most), rather than indulging in pointless
examination and debate of detailed data or clever statistical analysis. Full utilisation of the techniques outlined in Chapters 16 and 17 to involve employees and customers will also be very helpful.

18.2.2 The maturity of the CSM process

If organisations do take effective action on their PFIs and start doing best what matters most to customers, their customer satisfaction levels will improve. Consequently, many organisations find that as their CSM process matures, they have addressed the obvious satisfaction gaps, customer satisfaction has improved, and it is becoming increasingly difficult to improve it further. This is normal. As we pointed out in Chapter 11, the higher customer satisfaction is, the more difficult it becomes to improve it, so targets have to be reduced as satisfaction increases. Organisations in this situation will also typically find that the PFIs become less obvious, average scores and overall indices provide insufficient granularity and it starts to become difficult to recommend actionable outcomes. This is when the analysis of CSM data needs more time and sophistication. Survey outcomes that are particularly useful to organisations at this stage include:

Customer experience modelling (CEM)
As we explained in Chapter 15, this technique is very helpful for isolating highly specific and tangible actions that can be implemented by the organisation, as well as for demonstrating progress as the actions start to make a difference to customer satisfaction. CEM can also be used to identify the effect of improvements on customers’ loyalty as well as on their overall satisfaction.

Internal benchmarking
In the authors’ experience, one of the main factors explaining the success of organisations with very high levels of customer satisfaction is their effective use of internal benchmarking.
When customer satisfaction is very high, company-wide action on PFIs is often wasteful since many parts of the organisation will already be meeting or exceeding customers’ requirements in those areas. It is therefore more effective to focus all efforts and investment on improving the performance of the business units, stores, branches etc with the lowest customer satisfaction. To make this work, internal customer satisfaction league tables are necessary to focus the attention of managers in the poorer performing units. Also necessary is a very positive culture, demonstrating that the league tables are about opportunities for improvement, not about blame or naming and shaming. The culture must also ensure that units with high customer satisfaction will not guard their secrets but will gain reward and recognition through sharing them with less successful colleagues, e.g. through a customer satisfaction mentoring scheme.

Satisfaction enhancers
When organisations have very high levels of customer satisfaction, it is very likely that they are meeting all fundamental customer requirements. They will already be doing best what matters most to customers. As explained in Chapter 14, there is
often little return on investment from improving satisfaction maintainers beyond a good level, so companies with very high customer satisfaction may need to focus on achieving exceptional performance on satisfaction enhancers such as ‘helpfulness of staff’ or ‘treating me as a valued customer’.

Drilling down
Companies with particularly high levels of customer satisfaction and a mature CSM process will need to sharpen their focus even more. Using the asymmetric analysis explained in Chapter 14, they need to focus actions on where the best returns can be made. At very high levels of satisfaction, average scores and an overall index lose their utility – they are always very good, but the averages often mask the fact that the company is not totally consistent in delivering great customer experiences. Asymmetric analysis enables companies to target customers at a particular level of satisfaction, such as moving those in the ‘zone of indifference’ scoring 7s and 8s into the ‘zone of affection’, where they score 9s and 10s for satisfaction and are much more loyal. Alternatively, it could mean focusing on a specific demographic segment, a group of customers with unique requirements that are not being fully met, or a behavioural segment such as customers who transact with a certain frequency or via a specific channel.

Competitive advantage
Occasionally companies have succeeded in making customers highly satisfied, but their market is so competitive that customers are extremely promiscuous, spreading their category spend across several suppliers. They may even be loyal to more than one of the competing suppliers. In these circumstances companies need detailed information on customers’ attitudes about all their main competitors. As described in Chapter 13, actions will often be focused on customer requirements where they under-perform competitors rather than the areas of lowest customer satisfaction.
Decision tree analysis is often a particularly useful aid to targeting in these circumstances.
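The ‘drilling down’ idea above can be illustrated with a short Python sketch that classifies respondents’ overall satisfaction scores into the zones described, so that action can target customers in the zone of indifference. The scores are invented, and the label for the lowest band is our assumption rather than the book’s terminology.

```python
# Hedged sketch: segmenting respondents by satisfaction zone
# (10-point overall satisfaction scores; data invented for illustration).

def zone(score: int) -> str:
    if score >= 9:
        return "affection"      # 9s and 10s: highly satisfied, much more loyal
    if score >= 7:
        return "indifference"   # 7s and 8s: satisfied but vulnerable
    return "at risk"            # 6 and below (label assumed, not the book's)

scores = [10, 9, 8, 8, 7, 7, 7, 6, 9, 8, 5, 10, 7, 8, 9]

counts: dict[str, int] = {}
for s in scores:
    counts[zone(s)] = counts.get(zone(s), 0) + 1

# Share of respondents in each zone: the target group for action is
# the 'indifference' zone, to be moved up into 'affection'.
share = {z: n / len(scores) for z, n in counts.items()}
print(share)
```

In this invented sample over half the respondents sit in the zone of indifference, which is exactly the kind of finding that average scores and an overall index would mask.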
18.3 The challenges

18.3.1 Challenges for the public sector

One of the strongest conclusions from the authors’ many years of involvement in customer surveys is that for most organisations customer satisfaction isn’t a real priority. They talk the talk very well but don’t walk it. In the public sector especially, measures abound, but they’re typically poor to useless. Their real purpose is to satisfy Government or regulators, not to improve customer satisfaction. This is evidenced by the frequent use of 5-point verbal scales and headline measures based on percentage satisfied, making mediocre performers look good or very good when results are reported. Yet they are very far from good. Most organisations in the public sector have very low levels of customer satisfaction. This is evidenced by an Institute of Customer
Service study (the precursor to the UKCSI), which showed that local councils had by far the lowest levels of customer satisfaction across the ten sectors measured1. The challenges for the public sector are:

1. Adopt a consistent CSM methodology across the public sector instead of the plethora of incomparable measures that currently exist.
2. Have the guts to use a tough measure that accurately reflects how satisfied or dissatisfied customers feel, especially the use of a 10-point scale and basing the questions on the lens of the customer.
3. Abandon the obsession with the minutiae in the data and focus instead on action to address the obvious PFIs.

18.3.2 Challenges for the private sector

Even in the private sector, where there is plentiful evidence of the vastly superior profitability of highly satisfied customers, few companies go more than a perfunctory extra mile to achieve it. Very few follow all the customer satisfaction essentials listed above, and even fewer put their money where their mouth is and include any element of customer satisfaction-related pay in an employee’s reward package. Virtually none makes any serious attempt to link customer satisfaction to the company’s financial performance, which would, of course, demonstrate its benefits to everyone in the organisation. The challenges for the private sector are:

1. Since the goal is to improve customer satisfaction and loyalty rather than measure it, never forget the Harvard dictum that dissatisfaction with the status quo is an essential precursor to change. So, resist the temptation to be popular by announcing good customer satisfaction news supported by great comments from satisfied customers. Instead focus the methodology on producing the maximum help to satisfaction improvement efforts. Have a tough measure on a 10-point scale, benchmark your company against the best, not just your own sector, and study the comments from dissatisfied customers to understand how to improve.
2.
Take the long-term view, not the short-term one. Customer loyalty is very hard won but its value builds over time. Cost cutting may boost the bottom line now, but if there is any negative impact on customers it will come back to bite you.
3. Have the guts to use internal benchmarking in a big way. Don’t let the poor performers on customer satisfaction hide. Publish high-profile league tables sponsored from the top. Use them for reward and recognition. It will make managers take notice. But use them positively, saving the best rewards for the league leaders who coach and share secrets with the poor performers.
4. If you’re in the top quartile, and especially if you’re in a very competitive market, the best returns will come from the highest levels of customer satisfaction. It will therefore pay to invest heavily in the more advanced
methodologies covered in Chapters 13-15 of this book. Reduce the focus on average scores and the overall index and use asymmetric analysis and decision tree analysis to pinpoint the best customers to target, CEM to fine-tune action implementation and competitive analysis to defend your own vulnerable customers and to identify competitors’ customers who are most likely to be attracted to your own company’s strengths.
5. Be consistent. Many companies could advance from good to great on customer satisfaction just by being more consistent. Drilling down into their results shows that most customers are highly satisfied but some are much less satisfied, or that most customer experiences are excellent but a few have a very poor experience. Clearly, the company has the systems, processes and people to achieve very high levels of customer satisfaction on average, but it doesn’t always happen. Strong focus on the customers giving low or below-target scores, with extensive use of CEM to pinpoint and eliminate the behaviours causing these problems, will move customer satisfaction from good to great and boost profits too.
6. Top management must be the biggest champions of customer satisfaction. What is seen to matter most at the top will drive the behaviours at all levels through the organisation. Points 7 and 8 are great ways for top management to demonstrate how important customer satisfaction is to the company.
7. Don’t just reward the managers. Include an element of customer satisfaction-related pay for all employees.
8. Invest in producing an accurate measure of Customer Lifetime Value. Calculate precisely how it relates back to customer satisfaction and forwards to profitability. Make it the cornerstone of the company’s growth strategy. This will almost always show that highly satisfied customers are very profitable but that less satisfied ones (probably due to the inconsistency highlighted in point 5) are very costly to service.
As well as reducing the high costs of customer dissatisfaction, CLV enables companies to do best what matters most… to those who matter most.

18.3.3 Challenge the authors

Whether you are offended by our last section, intrigued by any parts of this book or even inspired to take action, you may want to air your views, challenge the authors or debate with other like-minded professionals. If so, this book's website, www.customersatisfactionbook.com, is the place for you. It also provides links to other useful customer satisfaction resources, such as events and training courses, as well as news and updates about customer satisfaction and loyalty.
References

1. ICS Breakthrough Research Report (2006), "Customer Priorities: What Customers Really Want", Institute of Customer Service, Colchester.
Glossary
Ambiguous questions
A question which may confuse respondents, or which they may understand in a different way to that intended. For example, 'which newspapers do you read regularly?' – the meaning of the word 'regularly' is unclear.

Attitudinal questions
Questions that seek to understand attitudes, motives, values or beliefs of respondents.

Average
Correctly termed arithmetic mean.

Baseline survey
Comprehensive customer survey carried out periodically to establish or update key benchmarks such as customers' priorities and organisational performance.

Behavioural questions
Questions that are concerned with what people do, as opposed to what they think.

Bivariate analysis
The analysis of the relationship between two variables – e.g. correlation.

Census
A survey of the entire population.

Classification questions
Used both for sampling and analysis, they serve as a check that the sample is representative (for example in terms of gender, age and social grade) and also form the basis of breakdown groups for cross-tabulations.

Closed questions
Questions to which respondents are asked to reply within the constraints of defined response categories.

Code of Conduct
The MRS Code of Conduct (available at http://www.mrs.org.uk/standards/guidelines.htm) consists of a set of rules and recommendations adhered to by the members of the society. The code prevents research being undertaken for the purpose of selling, and covers issues of client and respondent confidentiality.
Coding
The process of allocating codes to answers in order to categorise them into logical groups. For example, if the question was 'why are Xyz the best supplier?' coding might group answers under 'Product quality', 'Service quality', 'Lead times' etc.

Collinearity
A data condition that arises when independent variables are strongly related. It is a problem when building regression models, leading to unstable beta coefficients. Approaches to counter this problem include factor analysis and ridge regression.

Confidence interval
The range either side of the sample mean within which we are confident that the population mean will lie. Usually this is reported at the 95% confidence level; in other words, we are sure that if we took 100 similar samples then the mean would fall into this range 95 times. Or, more simply, we are 95% sure that the population mean falls in this range.

Consumer markets
Markets where the purchase is made by an individual for his or her own consumption or for the consumption of family, friends etc.

Convenience sample
A sample selected merely because it is convenient; such samples are liable to bias.

Correlation
When correlating two variables we measure the strength of the relationship between them. The correlation coefficient is in the range –1 to +1, with the absolute value indicating the strength. A negative coefficient indicates an inverse relationship (i.e. as one goes up the other goes down), 0 indicates no relationship and a positive coefficient indicates a positive relationship. In CSM we would only expect to find positive coefficients. The most common type of correlation is Pearson's Product Moment.

Creative comparisons
A projective technique in which respondents are asked to liken an organisation to something (frequently a car or an animal) and give reasons, which is what the researcher is interested in. For example: 'If Xyz was a car, what kind of car would it be? Why?' – "A Ford Mondeo, because it does its job, but it's unexceptional, there are lots of others that would do just as well."

CSM
Acronym for Customer Satisfaction Measurement.
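The Pearson Product Moment coefficient named under 'Correlation' can be computed by hand in a few lines. This is a generic illustration with invented satisfaction scores, not an example from the book.

```python
# A minimal hand-rolled Pearson Product Moment correlation.
# The satisfaction scores below are invented for illustration.

def pearson(x, y):
    n = len(x)
    mean_x = sum(x) / n
    mean_y = sum(y) / n
    cov = sum((a - mean_x) * (b - mean_y) for a, b in zip(x, y))
    sd_x = sum((a - mean_x) ** 2 for a in x) ** 0.5
    sd_y = sum((b - mean_y) ** 2 for b in y) ** 0.5
    return cov / (sd_x * sd_y)

# e.g. 'staff helpfulness' scores against overall satisfaction
helpfulness = [7, 8, 9, 6, 10, 5, 8]
overall     = [6, 8, 9, 5, 9, 5, 7]
r = pearson(helpfulness, overall)
print(round(r, 2))  # close to +1: a strong positive relationship
```

As the glossary notes, in CSM the coefficients of interest are almost always positive: requirements scored highly tend to go with high overall satisfaction.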
Customer loyalty
Customer loyalty has been achieved when an organisation is the preferred supplier for a customer, when the customer values his/her relationship with the organisation and enjoys dealing with it, and when the customer is prepared to go out of his/her way to recommend and use the supplier.

Customer Satisfaction Index (CSI)
A customer satisfaction index is the best headline measure of overall satisfaction. It is an average satisfaction score weighted according to the importance customers place on its component requirements. As a composite measure it is more sensitive and more reliable than any single measure.

Customer Satisfaction Measurement
A measure of how your organisation's 'total product' performs in relation to a set of customer requirements. In other words – are you delivering what customers want?

Dependent variable
A variable that is assumed to be explained by a number of items (independent variables) also measured. 'Overall satisfaction' is the usual dependent variable in CSM.

Depth interview
A loosely structured, usually face-to-face interview used in exploratory research in business markets, or if the subject matter is considered too sensitive for focus groups.

Derived importance
Derived importance is based upon the covariation between an outcome variable and a predictor variable. It is usually established by correlation or multiple regression.

Desk research
Research into secondary data, for example Mintel reports.

Diagrammatic scale
Also known as a graphic scale, a form of scale without numerical or verbal descriptors but which uses pictures, lines or other visual indicators.

Discussion guide
The document used by the moderator of a focus group as the equivalent of an interview script, though it is much less structured and prescriptive.

DMU (Decision making unit)
A group (formal or informal) of individuals involved in a purchasing decision.
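The CSI definition above describes an importance-weighted average. A minimal sketch of that arithmetic, assuming a ten-point scale and invented requirement scores (the book does not prescribe this exact code):

```python
# Sketch of an importance-weighted Customer Satisfaction Index (CSI).
# Assumes satisfaction is scored on a 1-10 scale; all data are invented.

def satisfaction_index(importance, satisfaction):
    """Weight each requirement's mean satisfaction score by the
    importance customers attach to it, then express the result
    as a percentage of the maximum possible score (10)."""
    total_importance = sum(importance.values())
    weighted = sum(satisfaction[req] * imp / total_importance
                   for req, imp in importance.items())
    return weighted / 10 * 100   # as a percentage

importance   = {'product quality': 9.4, 'delivery': 8.7, 'staff helpfulness': 7.9}
satisfaction = {'product quality': 8.1, 'delivery': 7.2, 'staff helpfulness': 8.8}
print(round(satisfaction_index(importance, satisfaction), 1))
```

Because the weights come from customers' own importance scores, a shortfall on a high-importance requirement pulls the index down more than the same shortfall on a low-importance one.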
Double questions
Questions which have more than one aspect, for example 'were the staff friendly and helpful?' – what if the staff were friendly but not helpful?

ESM
Acronym for Employee Satisfaction Measurement.

Exploratory research
Research undertaken prior to the main survey in order to gain understanding of the subject. In CSM, exploratory research should be used to understand what customer requirements are.

Face to face interview
An interview conducted in person, often at the respondent's home or office or in the street.

Facilitator
See Moderator.

Factor analysis
Used to examine relationships in a set of data to identify underlying factors or constructs that explain most of the variation in the original data set. Factors are usually uncorrelated or weakly correlated with each other. Factor scores can be calculated and used in order to eliminate the problem of collinearity in data and reduce the number of variables.

Feedback
Communicating the results of the survey – usually both internally and outside the organisation.

Focus group
A mainstay of qualitative research, used at the exploratory stage. A group of around eight people is guided in a discussion of topics of interest by a trained facilitator/moderator. Used for exploratory CSM in consumer markets.

Friendly Martian
A projective technique in which respondents are asked to advise a friendly alien on the process of interest (say getting a meal at a restaurant), covering all the things he should do, what he should avoid and so on. Since the Martian has no assumed knowledge, the respondent will include things that are normally taken for granted.
Gap analysis
Achieved by subtracting satisfaction scores from importance scores to reveal where satisfaction is falling furthest short of requirements. Requires interval-level data.

Group discussion
See Focus group.

Independent variable
One of a battery of questions assumed to explain variance in an 'outcome' variable such as overall satisfaction – with CSM data these are usually individual requirements such as 'product quality'.

Internal benchmarking
Data gathered internally and used to quantify and monitor aspects of service performance such as delivery reliability.

Interval data
Numerical scales whose response options are equally spaced, but there is no true zero – e.g. the Celsius scale, the ten-point numerical scale.

Item
A question on the questionnaire.

Kruskal's relative importance
One measure of relative importance. Produces the squared partial correlation averaged over all possible combinations of the predictor variables in a regression equation. Computationally very intensive.

Latent Class Regression
LCR allows us to identify homogeneous subsets of people in the data who form opinions in the same way, and to build separate regression equations for each of these groups. A very young technique that promises to revolutionise the way models are built, but as yet unproven.

Latent variable
A variable of interest that cannot be directly measured (for example intelligence) but has to be estimated through procedures such as factor analysis applied to a number of manifest variables deemed to be 'caused' by the latent variable (e.g. reading speed, exam results, etc.). Latent variables usually form the basis of Structural Equation Models.

Leading questions
A question that is prone to bias respondents to answer in a particular way, often positively. For example, 'how satisfied were you…' as opposed to 'how satisfied or dissatisfied were you…'.
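The gap analysis defined above is a simple subtraction per requirement. A short sketch with invented importance and satisfaction scores (the requirement names are hypothetical):

```python
# Gap analysis: importance score minus satisfaction score per requirement.
# Positive gaps flag where performance falls short of what matters most.
# All scores below are invented for illustration.

importance   = {'reliability': 9.6, 'delivery': 8.9, 'brochure design': 5.1}
satisfaction = {'reliability': 8.0, 'delivery': 8.7, 'brochure design': 7.4}

gaps = {req: importance[req] - satisfaction[req] for req in importance}

# Rank the requirements by gap size: the biggest shortfalls are
# candidates for PFIs (priorities for improvement).
for req, gap in sorted(gaps.items(), key=lambda kv: kv[1], reverse=True):
    print(f'{req:16} gap {gap:+.1f}')
```

Note the negative gap on the low-importance requirement: performance there exceeds what customers ask for, so it is not a priority.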
Likert scale
A scale running from 'strongly agree' to 'strongly disagree' on which respondents rate a number of statements. These should be a combination of positive and negative statements to avoid bias.

Linear regression
See Regression; assumes that the relationship between variables can be summarised by a straight line.

Mean
The most common type of average – the sum of scores divided by the total number of scores.

Median
The central value in a group of ranked data – useful for ordinal-level data. On some occasions the median may be a 'truer' reflection of the norm than the mean – for instance average income is usually a median, since the mean is distorted by a few people with very large salaries.

Mode
The most commonly occurring response.

Moderator
The researcher leading a focus group.

MRS
The Market Research Society (http://www.mrs.org.uk) – the professional body for market researchers in the UK. Implements the Code of Conduct by which most researchers abide and offers professional qualifications.

Multidimensional scaling (MDS)
This can be thought of as an alternative to factor analysis. In a similar way it aims to uncover underlying dimensions in the data, but a variety of measures of distance can be used. A common example is to take a matrix of distances between cities (such as that found at the front of a road atlas). Using MDS, an analysis in two dimensions would produce something very similar to a map.

Multiple regression
An extension of simple regression to include the effects of more than one predictor on an outcome variable.
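The 'Median' entry's income example is easy to demonstrate. A quick illustration with invented salary figures:

```python
# Why average income is usually a median: the mean is distorted by a
# few very large salaries. The income figures below are invented.

def mean(values):
    return sum(values) / len(values)

def median(values):
    ordered = sorted(values)
    mid = len(ordered) // 2
    if len(ordered) % 2:                       # odd count: middle value
        return ordered[mid]
    return (ordered[mid - 1] + ordered[mid]) / 2

incomes = [22_000, 24_000, 25_000, 27_000, 30_000, 31_000, 250_000]
print(mean(incomes))    # pulled far upwards by the single large salary
print(median(incomes))  # 27000: a 'truer' reflection of the norm
```

The same reasoning applies to satisfaction scores: when a few respondents give extreme ratings, the median can describe the typical customer better than the mean.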
Multivariate analysis
The analysis of relationships between several variables – e.g. factor analysis.

Mystery shopping
Also called 'mystery customer research' in business-to-business markets. Involves the collection of information by posing as ordinary customers.

Nominal data
Scales that only categorise people, but have no logical ordering – e.g. Male/Female.

Non-response bias
A major potential source of bias, particularly in postal surveys, in that responders' opinions may differ from non-responders'. For example, it is typically those with extreme opinions who respond, or those who feel most involved with your organisation.

Normal distribution
Graphically represented as a bell curve. Most data has a tendency to fall into this pattern, with people clustering around the mean. The shape of this curve for a variable can be calculated from the mean and standard deviation. The characteristics of the normal distribution are that 68% of scores will be within 1 standard deviation of the mean and 95% will be within 2 standard deviations. This tendency is the basis of assumptions used in confidence interval estimation and hypothesis testing.

Numerical scale
A scale for which each response option has a numerical descriptor, commonly 1-5, 1-7 or 1-10. The endpoints are usually anchored to provide a direction of response, for example 'completely dissatisfied' and 'completely satisfied'.

Open questions
Questions to which respondents reply in their own words, without explicit response categories. The answers are either coded at the time of interview into existing categories or post-coded.

Ordinal data
Response categories can be placed in a logical order, but the distance between categories is not equivalent – e.g. very likely – quite likely – not sure – quite unlikely – very unlikely.

Outcome variable
See Dependent variable.
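The 'Normal distribution' entry notes that the 95%-within-2-standard-deviations property underpins confidence interval estimation. A sketch of that calculation for a sample of invented ten-point satisfaction scores, using the conventional 1.96 multiplier:

```python
# 95% confidence interval for a sample mean, using the normal
# approximation (1.96 standard errors either side of the mean).
# The ten-point satisfaction scores below are invented.

def confidence_interval_95(scores):
    n = len(scores)
    m = sum(scores) / n
    variance = sum((s - m) ** 2 for s in scores) / (n - 1)  # sample variance
    std_error = (variance / n) ** 0.5
    margin = 1.96 * std_error   # the '2 standard deviations' rule, precisely 1.96
    return m - margin, m + margin

scores = [8, 7, 9, 6, 8, 7, 10, 8, 7, 9, 6, 8, 9, 7, 8]
low, high = confidence_interval_95(scores)
print(f'population mean lies between {low:.2f} and {high:.2f} (95% confidence)')
```

With such a small sample the interval is wide; as the 'Confidence interval' entry implies, a larger sample shrinks the standard error and narrows the range.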
Part correlation
See Semipartial correlation.

Partial correlation
The correlation between two numerical variables having accounted for the effects of other variables. This could be used to assess the independent contribution to overall satisfaction of 'staff friendliness' having removed a similar variable such as 'staff helpfulness'.

PFIs (priorities for improvement)
Those areas where improvements in performance would make the greatest contribution to increasing customer satisfaction.

Pilot survey
A survey conducted prior to the main survey using the same instrument, used to assess the questionnaire for potential problems such as respondent confusion or poor routing of questions.

Population
The group from which a sample is taken, e.g. all of an organisation's customers for CSM.

Postal survey
Any survey in which the questionnaire is administered by post. A mail survey in American usage.

Post-coding
Coding the answers to a question after the survey is complete.

Pre-coding
The process of determining in advance the categories within which respondents' answers will fall.

Predictor variable
See Independent variable.

Primary data
Data collected specifically for the question of interest – the CSM survey produces primary data.

Probability sampling
See Random sampling.
Probing
A prompt from the interviewer to encourage more explanation or clarification of an answer. These do not suggest answers or lead respondents but tend to be very general: 'Anything else?', 'In what way?', or even just sounds such as 'uh-huh'.

Product
What is sold. It encompasses intangible services as well as tangible goods.

Projective techniques
Common in qualitative research, these are a battery of techniques that aim to overcome barriers of communication based on embarrassment, eagerness to please, giving socially acceptable answers etc. Examples include theme boards, the 'Friendly Martian' and psychodrama.

Psychodrama
A projective technique also known as role playing. Participants are assigned roles and asked to improvise a short play.

Qualitative research
Research that aims not at measurement but at understanding. Sample sizes are small and techniques tend to be very loosely structured. Techniques used include focus groups and depth interviews.

Quantitative research
Research that aims to measure opinion in a statistically valid way, where the limits to the reliability of the measures can be accurately specified. Used at the main survey stage in CSM.

Quota sampling
A form of non-random sampling in which quotas are set for certain criteria in order to ensure that they are represented in the same proportions in the sample as they are in the population – for example a simple quota might specify a 40%-60% male-female split.

Random sampling
Sampling in which every member of the population has an equal chance of being selected.

Ratio data
A scale that has a true zero – e.g. the Kelvin scale. You are unlikely to come across this type of data in CSM work.
Regression
A model that aims to assess how much one variable affects another. This is related to correlation, but implies causality.

Requirement
A single satisfaction/importance question.

Response rate
The number of admissible completed interviews, normally represented as a percentage of the number invited to participate.

Routing
Instructions to an interviewer (or respondent in self-completion questionnaires), usually directing them to the next question to be answered based on their previous responses.

Sample
The people selected from the population to be interviewed.

Sampling
The process of selecting a part, or subset, of a target population to investigate the characteristics of a population at reduced cost in terms of time, effort and money. A sample must therefore be representative of the whole.

Secondary data
Data that already exists, for example government statistics.

Self-completion questionnaire
A questionnaire that is completed by the respondent rather than by an interviewer. Usually postal surveys, though recent innovations allow web or email surveys to be used.

Semipartial correlation
The correlation between two variables with the effects of other variables removed from the predictor variable only.

SIMALTO scale
Acronym for Simultaneous Multi-Attribute Trade-Off. A complex scale that requires respondents to rate their expected, experienced and ideal levels of performance on a variety of key processes. Requires the presence of a skilled interviewer to be reliably completed.
Social grade
The most common (though now somewhat dated) means of classifying respondents according to socio-economic criteria, based on the occupation of the chief income earner in a household. Classes are A, B, C1, C2, D and E, though these are often grouped into four: AB, C1, C2, DE, or even two: ABC1 and C2DE.

Standard deviation
The square root of the variance. It can be taken as the average distance that scores are away from the mean. It gives us vital information to reveal the pattern of scores lying behind a mean score.

Stratified sampling
The population is divided into subgroups of interest and then sampled within these groups. This could be used to ensure that the sample is representative of the relative size/value of the subgroups.

Street interview
A face-to-face interview conducted in the street or other public place.

Structural Equation Modelling (SEM)
A close relation of Confirmatory Factor Analysis, this is a powerful technique for hypothesis testing, implemented through specialist software such as LISREL and AMOS. It is a state-of-the-art and very rigorous technique for testing models.

Sum
The total of all the values for a question.

Systematic random sampling
Divide the population by the required sample size (e.g. 4000/400 = 10), choose a starting point at random and then select every nth (e.g. 10th) person for interview.

Theme board
A projective technique involving the use of collages of pictures mounted on card to act as a starting point for a discussion among focus group participants. Pictures might vary from illustrative to metaphorical.

Total product
Encompassing the entire range of benefits that a company/organisation provides when the customer makes a particular purchase. In addition to the core product it may include added-value benefits such as guarantees, fast delivery and free on-site maintenance.
Tracking
Repeated surveys using the same basic questionnaire, either continuously or at regular intervals, to identify changes in respondents' perceptions.
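The 'Systematic random sampling' procedure above (population 4000, sample 400, so every 10th person after a random start) can be sketched directly. The list of customer ids here is invented for illustration.

```python
# Systematic random sampling: divide the population by the required
# sample size to get the interval, pick a random starting point,
# then take every nth person. Customer ids are invented.

import random

def systematic_sample(population, sample_size):
    interval = len(population) // sample_size   # e.g. 4000 / 400 = 10
    start = random.randrange(interval)          # random starting point
    return [population[i] for i in range(start, len(population), interval)]

customers = list(range(1, 4001))                # ids 1..4000
sample = systematic_sample(customers, 400)
print(len(sample))  # 400
```

Provided the list is not ordered in a way that coincides with the interval, this gives each member of the population an equal chance of selection, as the 'Random sampling' entry requires.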
Unbalanced scale
A scale with unequal numbers of positive and negative response categories, leading to a bias in responses. An example is 'Excellent' – 'Good' – 'Average' – 'Poor'.

Univariate analysis
The analysis of a variable on its own – e.g. mean score, variance.

Variance
A measure of the amount of diversity or variation in the scores received for a question. The analysis of variance is key to many statistical measures of association.

Verbal scale
Any scale for which answers are given according to a range of phrases or words, as opposed to numerical or diagrammatic scales. The Likert scale is a common example.

Weighting
The process of assigning numerical coefficients (weights or weighting factors) to each of the elements in a set, in order to provide them with a desired degree of importance relative to one another.
Index
accuracy 17, 35, 38, 61, 65, 69, 71, 76, 77, 87, 88, 104, 117, 176, 178, 183, 197, 221, 255, 256
acquisition 9, 19, 20, 181, 216-218, 221, 222, 224
action mapping 275
actionability 6, 113, 138, 189, 250, 251, 255, 256, 258, 260
aggregating data 116
alternative suppliers 4, 215, 222
ambiguous questions 143
annual bonus 277
anonymity in surveys 86, 88, 94, 102-104, 127, 147, 280, 284
ACSI (American Customer Satisfaction Index) 18, 21, 22, 23, 29, 120, 123, 124, 194
asymmetry 227, 229
attitudes 215, 250, 251, 254, 255, 283, 285, 292
attitudes and behaviours 4, 14, 32, 217
attitudes - role in buying behaviour 206, 217, 230, 240, 242, 261, 287, 289
attitudinal questions 134
attractive quality 226, 228, 229, 233, 247
available customers 222
average 46, 47, 77, 84, 86
averages 150, 292
base profit 19, 20
baseline survey 253
behavioural questions 63
beliefs 217
benchmarking 29, 121, 166, 185, 192-196, 252
benchmarking - internal 198, 199, 272-274, 291, 293
benchmarking - league tables 34, 121, 291, 293
benefits 37, 38, 39, 45, 83, 87, 90, 97, 99, 100, 123, 166, 167, 181, 196, 209, 210, 211, 212, 216, 218, 221, 222, 223, 224, 232, 234, 238, 240, 241, 256, 274, 290, 293
beta coefficients 541
bias 381
bias - attitudinal 86
bias - interviewer 86, 94
bias - non-response 82, 84-87, 102
bias - positively biased rating scales 145, 146
bias - question induced 38
bivariate techniques 50-54
blame 272, 291
boosting response 87-91, 204
business impact 191, 192, 196, 198, 264
business to business markets 58, 69, 73, 75, 76, 92, 273
call backs 94, 95
Canadian Imperial Bank of Commerce 20
CAPI 91
Cellnet 1, 9
census surveys 78
challenges 93, 289, 292, 293
Chelsea Football Club 88
clarity of reporting 163, 185, 196, 198
classification questions 132, 138-140, 142, 147, 148, 279
cliff edge 133, 246
closed questions 128, 142
closing the questionnaire 147
coding 164, 257
collinearity 51, 52, 53, 54, 55, 153
commitment 16, 27, 41, 62, 100, 101, 131, 132, 133, 134, 180, 184, 215, 217, 225, 284
communications - employee 273
communications 216, 269, 277, 287, 289
communications - customer 284, 286, 287
comparison 56, 63, 109, 163, 192, 198, 199, 201, 202, 203, 204, 212, 223
competition 34, 41, 209, 233, 241
competitive positioning 209
competitor gap 207
competitor matrix 208
complaints 4, 6, 11, 12, 15, 20, 32, 102, 137, 172, 219, 261, 265, 272
concise information 196, 252
conclusions 14, 15, 23, 26, 36, 39, 43, 55, 67, 69, 79, 96, 98, 103, 107, 111, 123, 148, 164, 182, 185, 187, 197, 198, 199, 208, 222, 223, 226, 227, 237, 238, 239, 246, 251-254, 267, 280, 287, 289-294
confidence intervals 38, 71, 168, 169, 176-179, 183, 197, 219, 253
confidence level 175, 176, 178, 179, 183
confidentiality 65, 81, 88, 101, 102, 103, 104, 107, 269, 284
Conjoint Analysis 61
consulting customers 1, 11, 14, 97
consumer markets 57, 67, 93, 96, 99, 104, 107, 283
continuous tracking 105, 108, 250, 252, 253, 262, 267, 272
convenience samples 74
core questions 97, 107
correlation coefficient 48-51, 153
correlation matrix 51, 52
cost savings 19, 20
creative comparisons 63, 64
credibility 71, 127, 164, 185, 253, 277, 278, 285
cross-tabulations 219
culture 12, 24, 25, 41, 232, 268, 276, 278, 280, 291
customer behaviour 4, 32, 183, 206, 212
customer comments 92, 96, 129, 161, 254, 256, 273
customer decay 245, 246
customer expectations - alternative suppliers 2, 3, 6, 15
customer expectations 186
Customer Experience Modelling (CEM) 251, 256, 257, 258, 259, 260, 261, 262, 263, 264, 265, 267, 291, 294
customer lifetime value 3, 18, 19, 23, 26, 33, 135, 167, 180, 181, 182, 183
customer perception - internal surveys 279
customer perception - methodology 10, 205
customer perception - public relations aspect of 78, 81, 84, 85, 92, 94, 96-98, 100, 103, 105-107, 204, 252
customer perception - sampling 183
customer perception survey 263, 281
customer perceptions 2, 3, 16, 19, 27, 218, 224
customer perceptions - analysis and reporting of results 290
customer perceptions - attitudinal questions 128
customer requirements 37, 46, 51, 52, 60, 61, 64, 66, 125, 131, 132, 139, 140, 146, 147, 148, 152, 154, 158, 169, 173, 174, 189, 190, 191, 198, 199, 205, 208, 212, 214, 224, 229, 231, 236, 238, 239, 247, 253, 254, 262, 269, 273, 279, 280, 290, 291, 292
customer value map 211
Data Protection Act 78, 81, 84, 85, 92, 94, 96-98, 100, 103, 105-107, 204, 252
deciders 59
decision making 7, 17, 38, 56, 58, 60, 98, 102, 106, 118, 123, 219, 222, 234
Decision Making Units (DMUs) 58-60
decision tree analysis 219, 220, 221, 222, 224, 292, 294
defect 3, 4, 6, 15, 248
delight 2, 3, 128, 196, 229, 230, 232, 234, 236, 239, 240, 243, 245, 247
Dell 22
depth interviews 37, 57, 58, 61, 67, 92, 212, 285
derived importance 47, 50, 152-154, 171
determinance 47
Deutsche Telekom 10
dichotomous questions 129, 137, 138, 256, 257
differentiators (in customer satisfaction) 8, 47, 150, 161, 162, 165, 191, 192, 196, 198
dissatisfaction 2, 9, 12, 32, 35, 86, 93, 94, 103, 108, 118, 128, 133, 136, 137, 140, 146, 150, 159, 160, 164, 165, 172, 190, 191, 192, 196, 198, 229, 230, 232, 242, 256, 264, 293, 294
DMU - as sampling variable 74, 75, 143
DMU - personnel 75
DMUs 140, 146, 150
doorstep interviews 91
double counting 52, 53, 54, 136
double questions 138, 144
drawing conclusions 69, 208, 222, 239
drilling down 78, 159, 251, 292, 294
electronic surveys 81-83, 85, 86, 107
employee satisfaction 18, 20, 21, 25, 26, 88, 119, 269, 278
enhancers 226, 230-235, 237, 238, 247, 291, 292
Enterprise Rent-A-Car 35
European Union 13
event driven 105
executive summary 1
exit interviews 11, 91
expectation scales 110, 117
exploratory research 36, 57-67, 81, 98, 125, 126, 139, 140, 144, 148, 152, 155, 169, 172, 173, 205, 212, 214, 228, 252, 255, 274, 277, 279, 280, 285, 289
face to face interviews 58, 91-95, 148
facilitator 62, 63, 64
feedback 11-14, 43, 87, 99-101, 106, 108, 110, 127, 139, 252, 260, 262, 263, 271, 272, 273, 274, 277, 280, 282, 284-287
feedback reports 287
fieldwork 252
financial performance 3, 6, 13, 19, 20, 22, 182, 240, 293
five point scale 118-121, 167
flirtatious customers 216, 222
flow chart 259, 260
focus groups 37, 57, 58, 61-64, 67, 212, 285
free markets 18, 23
frequency of measures 277
Friendly Martian 64
gatekeepers 59
Gateway 22
GDP 24-26
givens (in customer satisfaction) 47, 55, 117, 150, 154, 155, 165, 233, 240
halo effect 105
handling problems and complaints 6, 137, 261, 272
Harvard Business Review 37
headline measure 7, 10, 40, 45, 67, 113, 121, 166-170, 182, 183, 252, 253
hot alert system 103, 104, 108
Hyundai 22
image 25, 99, 171, 204, 212
impact 15, 40, 44, 47, 48, 49, 50, 54, 55, 56, 63, 65, 66, 67, 76, 79, 87, 90, 92, 94, 137, 150, 152, 153, 154, 155, 164, 165, 170, 171, 175, 182, 189-192, 196, 198, 199, 206, 207, 211, 212, 228, 230, 232, 240, 241, 247, 254, 260-265, 275, 277, 286, 290, 293
importance 43-56
importance - derived 47, 50, 152, 153, 154, 171
importance - stated 46, 47, 50, 55, 56, 65, 66, 153, 154, 171, 172, 182, 290
improving satisfaction 7, 292
incentives 62, 89, 90, 107, 108, 109
incomplete measures 11
indirect questioning 59
indirect questions 60
influencers 59, 68
insurance company 8, 180, 218
internal benchmarking 198, 199, 272-274, 291, 293
internal metrics 11, 12, 137, 234, 235, 247
interviews - personal 91, 92, 95, 96, 97
interviews - telephone 35, 86, 91-97, 107, 115, 123, 148, 223, 273, 277, 279, 285
intranet 82, 84, 269, 273, 279
introducing the survey 98, 99
introductory letters 88
intrusion 14
intuitive judgement 223
investing 5, 7, 9, 246, 253, 260, 267
invitation to customers to participate in surveys 83, 84
involving customers 282-288
IVR 82, 83
jargon 142, 144
Kano (model) 228, 229, 230, 233, 234, 247, 248
key drivers 55, 56
lagging measures 11, 15
late responses 85
latent class regression 221, 222
legal issues 104
lens of the customer 37, 39, 43, 45, 46, 55, 56, 57, 61, 66, 67, 126, 132, 135, 169, 170, 183, 193, 194, 199, 205, 228, 250, 251, 252, 255, 267, 274, 289, 293
lens of the organisation 38, 43, 66, 126, 132, 137, 148, 180, 255, 256
Likert scales 112, 113
linear 6, 15, 33, 123, 226, 227, 229, 235, 236, 237, 238, 243, 246, 247, 248
logging data 264
long haul 253
long term 19, 24, 34, 82, 100, 128, 223, 232, 253, 273, 274, 275, 293
low response 10, 39, 79, 82, 84, 107
loyalty differentiators 150, 161, 162, 165, 191, 198
loyalty index 134, 135, 161, 166, 179, 180, 215, 220, 244
loyalty myths 13, 17, 249
loyalty schemes 216
loyalty segmentation 222
maintainers 226, 230-235, 237, 238, 240, 241, 245, 247, 292
Manchester United 88
margin of error 38, 70, 71, 76, 78, 115, 168, 175, 176, 177, 178, 183
Market Research Society 17, 68, 80, 104, 108, 127, 147
market standing 201, 205, 206, 211, 212, 214, 224
maximising response rates 86
MBNA 23, 170, 216, 277
measurement error 70, 71, 76, 167, 168, 169, 182
measuring impact 48
median 150, 151
mid-point 110, 115, 116, 117, 151, 155, 156, 167
mirror effect 25
mirror survey 256, 268, 269, 270, 271, 280
mixed methods 96, 97
mode 150, 151
moderator 62
monitoring 1, 3, 10, 11, 22, 38, 39, 71, 110, 113, 115, 117, 120, 121, 123, 137, 160, 166-183, 258, 262, 267, 281
multiple choice question 129
multiple regression 50, 52-55, 153, 233, 290
multivariate techniques 52, 111, 113, 119, 122
mystery shopping 11, 12, 13, 15
NASDAQ 21
net promoter score 7, 132, 179, 252
non-linear 6, 227, 238, 243, 248
non-parametric measures 111
normal distribution curve 77
not-for-profit sector 18, 25, 263-265
number of points 118, 146, 169, 213, 214
numerical rating scales 110, 112, 115, 119, 120, 122, 123, 165
open questions 127, 128, 257
opt out 98
Orange 9, 10, 181
ordinal scales 112, 113
organisational goal 1
overall satisfaction 5, 48-54, 121, 166-169, 171, 173, 182, 220, 235, 238, 247, 253, 265, 267, 290, 291
paper based surveys 85
parametric statistics 111
Pareto Effect 73
performance indicators 272
performance measures 12
periodic surveys 105, 106, 107, 108
personal interviews 91, 92, 95, 96, 97
PFIs (Priorities for Improvement) 44-47, 125, 126, 147, 148, 152, 163, 174, 183, 185, 186, 187, 189, 192, 196-198, 207, 208, 216, 220, 254, 256, 260, 262, 267, 280, 294
poor performers 118, 122, 293
postage paid reply 85, 87, 91
postal surveys 84, 86, 97
precision 175, 176, 179, 183
preference 132, 136, 215, 218
price premium 19, 20
price sensitive 9, 26, 212, 213, 224
Priorities for Improvement (PFIs) 44-47, 125, 126, 147, 148, 152, 163, 174, 183, 185, 186, 187, 189, 192, 196-198, 207, 208, 216, 220, 254, 256, 260, 262, 267, 280, 294
private sector 19, 34, 293
problems and complaints 20, 137, 261
profiling customers 218
profitability 3, 19, 25, 35, 230, 293, 294
projective techniques 63, 64, 228
prompting 60, 192
public sector 26, 34, 130, 136, 292, 293
qualitative 37, 47, 57, 58, 59, 61, 63, 64, 67, 68, 92, 94, 140
quantitative surveys of customer satisfaction 47, 57, 58, 59, 64, 65, 67, 92, 172, 214, 279, 280
questionnaires 37, 43, 44, 45, 48, 55-61, 64-67, 81, 82, 84-91, 94, 95, 96, 98, 99, 100, 102, 104, 107, 108, 110, 120, 124-152, 164, 168, 170, 174, 183, 193, 204, 212, 214, 222, 243, 255, 256, 269, 274, 279, 285, 286, 289
questions 7, 10, 17, 37, 38, 39, 43, 44-57, 60, 62, 63, 66, 67, 69, 84, 86, 92, 94, 97, 102, 106, 119, 120, 121, 123, 125-149, 161, 162, 168, 174, 175, 179, 180, 182, 183, 193, 194, 198, 199, 214, 215, 218, 238, 243, 252, 255, 256, 257, 258, 259, 262, 265, 267, 274, 277, 278, 279, 281, 285, 290, 293
quick wins 8, 15, 191, 198, 253, 275
Qwest 22
random error 71, 76, 168, 175
random sampling 72, 73, 74, 75, 79, 97, 290
range 2, 7, 10, 26, 38, 46, 47, 50, 58, 61, 63, 66, 70, 76, 84, 92, 112, 123, 129, 131, 135, 151, 153, 156, 159, 162, 163, 172, 179, 181, 193, 202, 211, 212, 218, 230, 238, 244, 245, 253
Rater scale 36
rating scales 39, 61, 92, 110, 111, 115, 122, 128, 130, 132, 142, 145, 167, 290
rating scales - positively biased scales 39, 48, 114, 116, 120, 121, 122, 123, 124, 139, 141, 164, 193, 250, 290
rating scales - ten point scale 123
rating scales - types of scale 46, 47, 116, 118-121, 123, 125, 130, 132, 141, 146, 148, 151, 152, 155, 156, 159, 160, 167, 172, 190, 257, 293
recommendation 7, 26, 132, 134, 179-182, 220, 239
recruitment of respondents 62
recruitment of staff 25
referrals 2, 19, 20, 180, 181, 182, 216, 232, 289
relative perceived value 201, 208, 209, 210, 211, 212, 214, 224
reliability - statistical reliability 9, 22, 36, 38, 49, 65, 75, 76, 78, 79, 95, 97, 103, 107, 120, 142, 166, 172, 175, 176, 182, 183, 204, 222, 230, 269, 272, 277, 290
reliable samples 65, 66, 71, 95, 96
reminders 87, 88, 107
repeat purchase 3, 121
repeating research 57
reply paid envelope (for self completion surveys) 87, 147
reputation 14, 22, 25, 31, 65, 99, 204, 213, 264-267, 283, 286
research agencies 251
response rates 10, 39, 79, 81, 82, 84, 85, 86, 87, 88, 89, 90, 91, 94, 95, 97, 98, 102, 104, 107, 108, 109, 204, 223, 279
return on investment 155, 231, 238, 239, 247, 256, 263, 292
reward 21, 23, 26, 35, 89, 90, 134, 216, 268, 275, 291, 293
reward and recognition 268, 275, 291, 293
routing 82, 84, 86, 94
running focus groups 62
Safeway 13, 20
sales and profit 20
sampling 14, 38, 59, 69, 70-81, 95, 97, 109, 168, 203, 274, 290
sampling frame 14, 71, 79
satisfaction gaps 113, 187, 188, 190, 191, 206, 207, 208, 224, 245, 280, 283, 290, 291
satisfaction improvement loop 106, 107
satisfaction index 16, 18, 20, 21, 23, 27, 29, 38, 40, 45, 67, 71, 113, 115, 120, 121, 123, 124, 150, 163, 164, 166, 169-176, 179, 180, 183, 193, 194, 195, 198, 200, 220, 244, 245, 253, 261, 263, 265, 272, 274, 275, 276, 277, 280, 286, 290
satisfaction related pay 23, 275, 276, 277, 278, 280, 293
satisfaction trap 3
satisfaction-loyalty relationship 5, 122, 242, 244, 245
satisfaction-profit chain 248
scatter plot 49
Sears Roebuck 21
segmentation 61, 138, 217, 218, 221, 222, 225
self completion questionnaires 35, 95, 96, 102, 104, 107, 120, 121, 126, 127, 130, 138, 140, 141, 147, 156, 214, 269
service quality 6, 11, 16, 22, 27, 31, 36, 38, 40, 41, 42, 56, 67, 68, 149, 152, 153, 155, 160, 162, 173, 184, 186, 187, 188, 192, 196, 197, 199, 240, 248, 249, 270, 271, 281, 287
SERVQUAL 36, 38, 40, 41, 42, 56, 68, 149, 170, 184, 199, 240
share of wallet 136, 215
shareholder value 10, 21, 22, 27
shareholders 10, 18, 19, 21, 22, 26, 27, 135
short term 14, 34, 128, 245, 246, 274, 275, 293
Smile School 13
software 82, 84, 85, 86, 133, 150, 159, 164, 231, 232
spend 4, 21, 24, 25, 32, 34, 100, 133, 134, 181, 182, 292
SPSS (computer software) 164, 165
standard deviation 77, 111, 113, 150, 158, 159, 164, 175, 177, 178, 179, 183
Starbucks 2, 3, 242
stated importance 46, 47, 50, 55, 56, 65, 66, 153, 154, 171, 172, 182, 290
statistical analysis 83, 110, 124, 164, 290, 291
statistical inference 69
statistical modelling 26, 36, 119
statistical reliability 38, 49, 75, 172, 277
stimulus material 46, 63
stock prices 22
stratified random sampling 73, 74, 75, 79, 97
street interviews 62, 91
structure of the questionnaire 130
sub-groups 78, 175, 176, 219, 269
switching 133, 136, 201, 215, 218, 222, 245, 246
systematic error 70, 74, 76
Table of Outcomes 254, 267
targets 166, 258, 262, 276, 277, 280, 291
telephone survey 65, 94, 95, 96, 97, 107, 108, 109, 156
thematic apperception 63
theme boards 63
time based questions 258
timing 105, 107
top performers 122
top priority 47, 61, 64, 152
total importance matrix 65, 66
total product concept 233
tracking 45, 58, 66, 67, 97, 105, 108, 118, 123, 137, 147, 167, 169, 172, 193, 247, 250, 252, 253, 256, 262, 263, 267, 272, 290
trend data 29, 253
trust 19, 25, 132, 135, 215, 264, 265, 285
unclassifiable data pattern 237
understanding gap (concept) 268, 271, 280
unrepresentative samples 69, 107
unscientific surveys 38
users 59, 84, 85, 98, 218, 240
USP 10
utility 18, 34, 86, 89, 118, 133, 222, 292
value 3, 6, 10, 11, 14, 16, 18-24, 26, 27, 32, 33, 36, 38, 58, 73, 74, 75, 80, 89, 90, 96, 97, 99, 102, 132, 135, 136, 143, 151, 153, 156, 167, 180-184, 186, 196, 201, 208, 209, 210, 211, 212, 214, 215, 219, 221, 224, 225, 232, 233, 242, 252, 262, 263, 272, 281, 284, 288, 293, 294
Value-Profit chain 6, 16, 27, 184, 225, 281
variance 58, 109, 111, 119, 122, 123, 134, 150, 156, 158, 177, 221
venues (for focus groups) 62, 141
verbal scales 47, 112-118, 120, 121, 123, 150, 162, 163, 165, 292
visual prompts 92
Vodafone 9
voodoo poll 69, 75
web surveys 82, 83, 84, 85, 86, 95, 131, 164
weighting 169
weighted index 36, 46, 166, 169, 170, 183, 205
wow 229, 240, 247
wowing the customer 1, 2
zones 4, 5, 15, 33, 34, 66, 118, 122, 145, 167, 209, 210, 211, 242-246, 249, 292
zone of delight 243
zone of mere satisfaction 34, 243
zone of opportunity 244, 245, 246
zone of pain 243
zone of stability 246
The main purpose of all organisations is meeting their customers' requirements. In a democracy it's the only reason for public sector organisations to exist. For private sector companies the rationale is pure business logic. They must maintain revenues just to survive, and most are aiming much higher, so to achieve their objectives companies must constantly manage and optimise present and future cash flows from customers. In most markets customers are not locked in. They have choices. As Adam Smith told us over 200 years ago, people seek pleasure and avoid pain, so they move towards companies that give them a good experience and away from those subjecting them to a poor one. Nothing, therefore, is more important to companies' future profits than understanding how their customers feel about the customer experience and how this will affect their future behaviour. Customer Satisfaction provides the first fully referenced and comprehensive guide to this vital subject.

"This book does a tremendous job of bringing to life customer satisfaction and its significance to modern businesses. The numerous examples contained within the book's pages have proved a fresh and continuous source of inspiration and expertise as I work with my organisation in helping them understand why we should do what matters most to our customers and the lasting effect such actions will have on both our customer loyalty and retention. The authors are to be commended."
Scott Davidson, Research Manager, Tesco Personal Finance

"I really enjoyed reading Customer Satisfaction. It was a good mix of academia, insights and case studies – this really carried the subject matter along and made it engaging. I would recommend it to managers looking at devising or revising a customer satisfaction strategy."
Mark Adams, Head of Service Experience, Virgin Mobile

"Customer Satisfaction makes the case for monitoring and improving customer satisfaction in easy-to-read concise language, uncovering new insights and debunking a few popular myths in the process. It includes thought-provoking examples, comprehensible tables, graphs and diagrams, and engaging narrative synthesised from leading academic research in the area and the authors' considerable experience. It will prove an invaluable tool for anyone tasked with improving customer satisfaction in their organisation, whatever their level of knowledge or experience."
Quintin Hunte, Customer Experience Manager, Fiat Auto UK
www.customersatisfactionbook.com