Cracking the conversation code with computational speech analytics: turning your voice data into great business outcomes
Computational speech analytics has enormous potential. But as Andrew Moorhouse, Director of Insight at Blue Sky points out, extracting the data that will genuinely lift your customer experience scores is trickier than it looks.
What is computational speech analytics? It's the process of analysing recorded calls to gather customer information that improves the customer journey and future interactions. It's used primarily by customer contact centres to figure out what constitutes ‘good’ customer service and replicate it for the best ROI. Here, we explore how well those measures work in real-life customer service.
Imagine a company that receives 8 million customer calls a year – pretty typical for a large corporate contact centre. The Quality Assurance (QA) team might manage to listen to around 0.7% of all conversations – for a cost of around £4 million over five years. Value for money? Are you mad?
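To put those figures in perspective, here's a rough back-of-envelope calculation using only the numbers above (a sketch, not audited economics):

```python
# Back-of-envelope: what does manual QA actually cover, and at what cost?
# Figures taken from the example above; everything else is simple arithmetic.
calls_per_year = 8_000_000
qa_coverage = 0.007                 # ~0.7% of calls actually reviewed
five_year_cost = 4_000_000          # £, over five years

reviewed_per_year = calls_per_year * qa_coverage
cost_per_reviewed_call = five_year_cost / (reviewed_per_year * 5)

print(f"{reviewed_per_year:,.0f} calls reviewed per year")
print(f"~£{cost_per_reviewed_call:.2f} per reviewed call")
```

So roughly 56,000 calls a year get a human ear, at around £14 per call, while the other 99.3% go unheard.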
No wonder computational speech analytics has become such a big hit with organisations the world over. New speech recognition systems can analyse every conversation with about 70% accuracy. They can also identify words and phrases that indicate whether the customer is receiving adequate service, and warn if a customer is becoming increasingly unhappy or dissatisfied.
An AI utopia, right? Well, not quite.
Translating speech data into actual business outcomes is harder than many leaders think. Technology is brilliant, but blind. It requires human insight to ‘see’ what’s worth looking for, and human skills to interpret large-scale speech data in a way that will actually drive up CSat scores. Great tools need great minds.
The great technology trap: pairing the human touch with the AI
Unfortunately, too many organisations get so excited about the tech that they fail to pair it with their people. They invest in expensive ‘out of the box’ solutions without questioning the in-built biases that will hamper ROI. Worse, their customer service advisors come to believe their only remaining role is to game the system and try to win points from the AI, not the customer.
And the truth about what’s happening with your customers gets muddied, not clarified.
My first experience of the danger of computational speech analytics done badly took place in a large contact centre operation around four years ago – at the very real company I asked you to imagine above.
As a conversational analyst, my job was pretty straightforward: to listen to the calls coming in and determine which were driving a great customer experience, and which were not. This organisation worked from a simple 1 to 5 scale, rather than an NPS score. If the customer said they were very satisfied with the call and the outcome, it scored a 5; if they were very dissatisfied it scored a 1.
The organisation had recently put a speech analytics system in place. Unfortunately, it had also demolished its QA teams, in the belief that the computer would do their job. After all, a £4 million saving is hard to ignore.
However, for all the sophisticated tech the organisation had invested in, the CSat results were plummeting. Repeat contact stood at about 16% (within the hour), the level of complaints wasn’t shifting and Average Handling Time (AHT) wasn’t improving. Something was going badly wrong.
We decided to look under the hood and examine what was really going on. We did some deep analytics on their data, looking at 33,300 customer interactions, and discovered a startling truth. There was zero correlation between the insight coming from the speech analytics system and the actual level of customer satisfaction.
That’s worse than no ROI. That’s a false feedback loop guaranteed to drag your outcomes down.
Coding for dissatisfaction: how successful is computational speech analytics?
There was an upside to this depressing situation, though. Deep within those conversations there were some fascinating findings that have gone on to inform a lot of what we do at Blue Sky now.
For example, we discovered that making an apology made no difference to the level of customer satisfaction. Nor did being polite and personable. These old ‘best practices’ might have impressed customers 10 or 15 years previously, but now they made very little impact.
In a fast and competitive digital age, ‘smile when you dial’ is no longer a driver of satisfaction; it’s a simple hygiene factor that customers expect as standard.
Nonetheless, the out-of-the-box speech analytics programme was linked to a dashboard system that would give an advisor a green tick if it heard an apology, and a red mark if not. It didn’t know that this CSat criterion was useless; it was coded to think apologies worked.
The consequence? Advisors were apologising for everything! Different contact centre teams had created league tables to keep track of who was apologising the most. In fact the learning and development team had even started training advisors on the four stages of regret. And, of course, customers were just getting more and more dissatisfied.
Within the same system, another way that call advisors could earn a green tick was by finishing a call with the phrase “is there anything else I can help you with?” So, eager to please the system, they parroted out this hollow phrase again and again – to, our deep analytics showed, zero effect.
Uncovering invisible biases: behind the code
Worse still, some of the coding had infused the analytics with a personal bias. We discovered that if any advisor used the common, colloquial term ‘to be honest’ they were immediately scored down. Why? Because one person in the IT department who coded the system personally hated that phrase; he was from Dundee, and “to be honest” is a ‘verbal tic’ often used by Glaswegians. So, without consultation, he’d simply built it into the coding as something that should never be said. Ouch!
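Pulling those examples together: under the hood, this kind of dashboard often amounts to little more than keyword matching. Here's a deliberately naive sketch of how such a scorer behaves (all phrases and weights are invented for illustration, not taken from any real product):

```python
# A deliberately naive keyword-based scorer, of the kind that rewards
# phrases rather than outcomes. All phrases and weights are invented.
REWARDED_PHRASES = {
    "sorry": 1,                                       # apology -> green tick
    "is there anything else i can help you with": 1,  # scripted sign-off
}
PENALISED_PHRASES = {
    "to be honest": -1,                               # one coder's personal dislike
}

def score_transcript(transcript: str) -> int:
    """Score a call transcript by counting rewarded and penalised phrases."""
    text = transcript.lower()
    score = 0
    for phrase, points in {**REWARDED_PHRASES, **PENALISED_PHRASES}.items():
        score += points * text.count(phrase)
    return score

# An advisor can 'game' the scorer without helping the customer at all:
gamed = "Sorry. Sorry again. Is there anything else I can help you with?"
print(score_transcript(gamed))   # scores highly despite solving nothing

# ...while a genuinely helpful advisor gets marked down:
helpful = "I've fixed your billing issue. To be honest, it was our mistake."
print(score_transcript(helpful))  # negative score for an honest, useful call
```

The point of the sketch is that nothing in the scorer measures whether the customer's problem was solved, so advisors are rewarded for reciting phrases, which is precisely what we saw in the league tables above.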
Thankfully, there is a happy ending to this AI tale of woe. Four years on, that organisation is rethinking its approach to speech analytics. It has rehired its QA people (after firing all 18), focused on empowering them with the skills that will help them bring their expertise to the technology, and managed to get its CSat figures climbing up again.
Where QA meets AI: customer service meets conversation analysis
Back in 1997, Garry Kasparov (the reigning world chess champion) was beaten by a computer for the first time. He’d played against IBM’s Deep Blue in 1996 and won, but in the following year IBM upgraded Deep Blue… and the machine triumphed.
Today, there are deep learning machines infinitely more powerful than Deep Blue, so you would think they would beat humans effortlessly, right? But that’s not the case.
When you team an average-to-good chess player with a good chess computer, they are often able to beat a chess supercomputer playing on its own.
The point being: a combination of human intelligence and artificial intelligence can (for now) achieve far more than AI alone. Our aim should not be to replace people with machines but to work alongside them. We should use machines to give uniquely gifted humans additional superpowers.
And that’s where I see the future of computational speech analytics heading. By using humans to guide the ‘grandmaster’ programme towards what it should really be looking for, we can make sure our AI+ systems reward behaviours that truly boost customer experience – and result in a much more empowered, engaged workplace.
And who better to take ownership of this new opportunity than QA teams? With a dose of retraining and re-skilling, these experts are perfectly positioned to bring what they already know about excellent customer experience to their AI companions, in a way that delivers those longed-for results.
For many organisations, especially those who’ve taken on speech analytics quite recently, this represents a major skills gap. There’s also a considerable risk that those focused on the P&L – not the Directors of Customer Experience – end up being the ones who choose and mould the new operational model, based on false assumptions and flawed criteria. I know a number of utility companies with over 110 QA team members; ditch them, and that’s a tempting £22 million saving across 5 years. Up-skill these team players to deliver better business outcomes alongside your new AI, and your profits will far outstrip any short-sighted cull.
With the help of technology, Blue Sky is breaking new ground in so many areas of customer experience, and large-scale speech data gathering is at the vanguard. With our approach to human augmented speech analytics, organisations really can turn that curse into a massive opportunity. Leave out the human part, however, and they’re destined to repeat their customer mistakes for years to come.
Does your business need help cracking the code of your conversations? We’re here to help, so get in touch.