Are you playing Whack-A-Mole or Connect 4?

In a previous blog, “Weighing the Pig”, I outlined the importance of focusing the Voice of the Customer (VoC) programme on the drivers of the customer experience. A single metric alone, whether it is NPS, NCE or C-Sat, is not enough to support a CX programme and focus the organisation’s efforts on improving the customer journey and strengthening the customer relationship. The metric needs to be supported by strong driver analysis and understanding, tracked over time, as a single snapshot will not give a reliable indication of performance trends or of changes in their driving forces.

When basic is not enough

I have seen some retailers still collecting sporadic customer feedback on paper slips handed to customers in stores, asking them to tick a score or two and possibly add some comments, before posting it in the ‘Suggestions Box’ in exchange for being entered in a prize draw of some sort. Just picture the scene. The lovely lady has just managed to get around the store ticking off the items she needs from the long shopping list, annoyed that those store people keep changing the location of the products in the aisles and sales assistants are nowhere to be seen, so she has to walk back and forth several times trying to find them – probably with one or two screaming kids in tow. Then she has to wait in a long queue at the till to pay, trying to unload the trolley while preventing the kids from grabbing the sweets on display within far too easy reach, and quickly pack away her shopping in her re-usable bags-for-life, desperately struggling to tame the now totally bored and uncontrollable kids. Balancing bags, kids and till receipt, she is now also handed a paper slip and a pencil by a smiley customer service assistant, promising her a fantastic holiday in the sun if she were so kind as to fill in the ‘suggestions’ form and lucky enough to be the 1 in a million respondent who will be picked out of the hat to win. Maybe the prospect of a holiday in the sun is enticing enough at this specific moment (feeling like a donkey carrying 5 shopping bags, 2 screaming kids, 1 till receipt, 1 suggestion form and 1 pencil) to convince her to briefly lay down the load and fill it in – but the likelihood is that by the time she has written her contact details (for how else could they possibly tell her she has won?) and ticked those 2 scores, she will have neither the time nor the inclination to provide any more useful, actionable ‘suggestions’ or feedback.

Other retailers have freestanding machines at the exit of the store with 3 traffic light buttons: a green Mr Happy, a red Mr Grumpy and an orange Mr Bored. While these could be used to get an in-the-moment pulse check, in isolation they are also totally useless in helping management understand why customers reacted in that way. In addition, I have seen adults pressing the buttons randomly to see what they do and whether they make sounds (yes, I admit I also did that, and yes, they do emit mechanical disgruntled or cheery voices!), while kids happily play an exciting game of Whack-A-Mole with them, making the ‘feedback’ results completely unreliable anyway. Well, at least these games provide some entertainment factor…

Connecting the 4 Vs of an effective VoC programme

A Whack-A-Mole system will not provide a true indication of the customer experience, as it is not accurate or reliable. In fact, you can whack as many moles as you want, but they just keep popping up again and the picture will be constantly changing and shifting in a random, unintelligible way.

Instead, in my experience, a Voice of the Customer programme should be like a carefully managed and planned game of Connect 4, built around the ‘4 Vs’ of effective customer insight: Validity, Volume, Velocity and Variety. Only when the 4 pieces slot together, properly balanced and aligned, will you win the game and have strong, actionable customer experience data.

1. Validity

Reliability and accuracy are essential to give the right credibility to the CX data. Without them, you will spend all your time trying to defend your insight and conclusions and prove their correctness to the exec team, rather than focusing on making the case for the decisions and actions you are trying to drive.

To achieve this, the Voice of the Customer programme needs to be carefully mapped across the end-to-end customer journey and supported by a rigorous, structured system of sampling and feedback collection that feeds quantitative metrics of performance – after all, people like to see performance numbers and do not trust ‘sentiment’ as a key KPI.

It is also essential to remove any opportunities for manipulation or control over which customers receive the surveys. Some organisations leave that in the hands of the customer service representatives, who are entrusted with opting customers in manually or through telephony or digital solutions. While ‘trusting your employees’ is probably one of your cultural values, this process makes it very tempting and easy to avoid sending surveys out after less positive customer interactions, especially if CX scores are linked to individual performance management or remuneration.

This is where in-house processes often fail, and I do recommend the use of a tested and impartial solution from an outside provider.
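To picture what an impartial selection process can look like in practice, here is a minimal sketch in Python (purely illustrative – the interaction fields and the sampling rate are assumptions, not a description of any particular provider’s solution) of how surveys could be triggered automatically from the interaction log, rather than at a representative’s discretion:

```python
import hashlib

def should_survey(interaction_id: str, sample_rate: float = 0.2) -> bool:
    """Decide impartially whether an interaction gets a survey invitation.

    Deriving the decision from a hash of the interaction ID (rather than
    letting an agent opt customers in) means nobody can steer the sample
    towards the happier conversations.
    """
    digest = hashlib.sha256(interaction_id.encode()).hexdigest()
    # Map the first 8 hex characters to a number in [0, 1] and compare.
    return int(digest[:8], 16) / 0xFFFFFFFF < sample_rate

# Hypothetical interaction records, invented for illustration.
interactions = [
    {"id": "TXN-1001", "channel": "store"},
    {"id": "TXN-1002", "channel": "webchat"},
    {"id": "TXN-1003", "channel": "phone"},
]

selected = [i for i in interactions if should_survey(i["id"])]
print(f"{len(selected)} of {len(interactions)} interactions selected for a survey")
```

Because the selection is derived from the interaction record itself, nobody can quietly skip the awkward conversations, and the sampling rate can be tuned centrally without touching the front line.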

2. Volume

I have spent many hours in my career debating whether some scores were statistically reliable given small sample sizes – especially when they indicated poor performance and the touchpoint owners were more concerned with looking for justifications than for solutions. In fact, even if simply to remove any opportunity for playing ‘Get Out Of Jail Free’ cards, having statistically robust response volumes is very important, whether you are measuring your own performance or benchmarking against other organisations. If nothing else, insufficient response volumes can be skewed by performance outliers and lead to wrong insights and conclusions – and therefore inform the wrong decisions.
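As a rough illustration of why volumes matter, the sketch below (Python; the 78% score and the standard normal-approximation margin of error are illustrative assumptions only) shows how wide the uncertainty around a simple ‘percentage satisfied’ score is at different response volumes:

```python
import math

def margin_of_error(score: float, n: int, z: float = 1.96) -> float:
    """Approximate 95% margin of error for a proportion-style score (e.g. % satisfied)."""
    return z * math.sqrt(score * (1 - score) / n)

score = 0.78  # illustrative: 78% of respondents gave a positive score
for n in (30, 100, 400, 1000):
    print(f"n={n:>4}: {score:.0%} +/- {margin_of_error(score, n):.1%}")
```

With only 30 responses the ‘true’ score could plausibly sit almost 15 points either side of the reported one, which is exactly the wiggle room a defensive touchpoint owner will reach for.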

Also, this data collection needs to take place progressively over time, since trends are as important as the scores themselves: they indicate whether things are improving or deteriorating. A once- or twice-a-year survey, even if very detailed, will never give you a clear view of performance and how it is evolving, and will be skewed by external factors like seasonality or socio-economic events. More importantly, finding out that you had a big service issue six months ago will not be very useful for driving immediate remedial action, in the way a real-time, predictive feedback system can.
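To make the trend point concrete, here is a small Python sketch (the monthly figures are invented) showing how a simple rolling view surfaces a gradual decline that a once-a-year survey would only reveal months later:

```python
from statistics import mean

# Hypothetical monthly satisfaction scores (percentages, invented for illustration).
monthly_scores = [82, 81, 83, 80, 78, 77, 75, 74]

window = 3  # a three-month rolling average smooths out one-off blips
for month in range(window, len(monthly_scores) + 1):
    rolling = mean(monthly_scores[month - window:month])
    print(f"month {month}: score {monthly_scores[month - 1]}, rolling average {rolling:.1f}")
```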

3. Velocity

Another key consideration is time. Sending a feedback survey to a customer one week or more after the interaction is not very useful, as time will have diluted their recollection of the details and therefore their responses will be on average more ‘neutral’. It is a bit like giving birth – ask a woman immediately after the event whether she will ever do it again, and you will receive a very detailed and powerful rendition of all the rational and emotional reasons why she will never ever repeat that experience! And a couple of months later, there she is, recounting that it was not that bad and she cannot wait to have her next kid.

Surveying customers as soon after the interaction as possible will provide the most accurate and complete feedback, as they are more inclined to share the details needed to deep-dive into the drivers of the experience.

But another key component of ‘velocity’ is the length of the survey: it needs to be fast for the customer to complete. Once upon a time, people were happy to receive a 4-page paper questionnaire or to sit through a 20-minute telephone interview recounting half their life’s story to market research agencies – but those days are gone. Nowadays, customers are bombarded with surveys from far too many sources, and there is nothing worse than receiving a questionnaire which ‘will only take 10 minutes’ to fill in. That’s 9 minutes too many!

4. Variety

Finally, the VoC programme should be structured to cover all the key touchpoints of the end-to-end customer journey through all the different channels (physical, phone, online, webchat, social media etc.), as well as the overall relationship the customer has with the organisation and the perception of the brand.

I am a strong advocate of combining Touchpoint feedback with Relationship feedback, as they have important complementary aspects. Surveying customers after the key journey touchpoints enables their management and improvement in a focused and targeted way, down to individual employee, process and product level. On the other hand, it is also important to understand customers’ overall experience with the organisation, by carefully selecting the right moment in the customer lifecycle to ask the right questions. This relationship survey will also provide a deeper understanding of the relative importance of different factors (like product, price, service, brand, reputation, trust etc.) and of customers’ overall level of engagement and loyalty – i.e. the ultimate NPS score you should hang your hat on, as it is based on the sum of all the cross-functional parts.
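For readers who like to see what the ‘relative importance of different factors’ can look like in practice, below is a minimal sketch in Python (the factor names, responses and the simple correlation-based ranking are all illustrative assumptions; a real driver analysis would use a far larger sample and more robust statistical modelling):

```python
from statistics import correlation  # available from Python 3.10

# Hypothetical relationship-survey responses: factor scores plus an overall score,
# all values invented purely for illustration.
responses = [
    {"product": 8, "price": 6, "service": 9, "brand": 7, "overall": 8},
    {"product": 5, "price": 7, "service": 4, "brand": 6, "overall": 5},
    {"product": 7, "price": 5, "service": 8, "brand": 7, "overall": 7},
    {"product": 6, "price": 8, "service": 5, "brand": 6, "overall": 6},
    {"product": 9, "price": 6, "service": 9, "brand": 8, "overall": 9},
]

overall = [r["overall"] for r in responses]
drivers = ["product", "price", "service", "brand"]

# Rank each factor by how strongly it moves with the overall relationship score.
importance = {d: correlation([r[d] for r in responses], overall) for d in drivers}
for driver, r in sorted(importance.items(), key=lambda kv: kv[1], reverse=True):
    print(f"{driver:<8} correlation with overall score: {r:+.2f}")
```

The output simply ranks the factors; in a real programme, that ranking is what tells you where to direct improvement effort first.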

In conclusion, you need to slot in and carefully connect your 4 pieces to have accurate customer experience data. Your next task will then be to turn this data into actionable insight and kick-start your CX programme – but that will be the topic of another blog, so watch this space.

 

Comment, like or share this blog to keep the customer experience conversation going and help us create CX excellence