Customer service chatbots: Optimizing after launch
After spending months carefully developing a customer service chatbot, launch day is a welcome sight. It means the work is done and the hard part is over. It’s time to sit back, relax, and watch the chatbot flawlessly handle customer concerns.
This situation would only happen in an ideal world. At Dashbot, we believe launch day is day one. A customer service chatbot is a living, breathing organism, and like any living thing, it needs fuel to thrive. Constant optimization is that fuel, and it’s what ensures the chatbot continues to provide great support to your customers. But how does one go about improving the experience? Here are the steps every chatbot team should take after launch.
Data is your friend
We always stress the importance of analytics when it comes to building a great conversational experience, because it’s the only way to track performance and gather the data the optimization process depends on.
Building chatbot content can take months. Knowing what customers actually found useful will make developing future iterations more efficient. Access to conversation transcripts and reports on the accompanying data is critical to that end. Generated training data or premade data sets are a good starting point, but user conversations show developers and conversation designers how customers really interact with the chatbot. Training on real customer data lets the chatbot meet users where they are.
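To make this concrete, one simple way to capture real conversation data is to log every turn (message, matched Intent, confidence) as a JSON line for later review. This is a minimal sketch with made-up function and field names, not Dashbot’s API:

```python
import json
from datetime import datetime, timezone

def log_turn(session_id, user_message, matched_intent, confidence,
             path="chat_log.jsonl"):
    """Append one conversation turn as a JSON line for later analysis."""
    record = {
        "ts": datetime.now(timezone.utc).isoformat(),
        "session": session_id,
        "message": user_message,
        "intent": matched_intent,   # None when the fallback fired
        "confidence": confidence,
    }
    with open(path, "a", encoding="utf-8") as f:
        f.write(json.dumps(record) + "\n")
```

A log like this is the raw material: every row where `intent` is `None` or confidence is low is a candidate for the optimization work described below.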
Identify chatbot response failures
A freshly launched chatbot won’t be able to handle everything a customer wants to throw at it. Humans are unpredictable, so it’s very likely that developers will not anticipate every single support question customers want to ask. Including a fallback response (a catch-all message) is a smart way of handling those unpredictable moments.
So what does a good “catch-all” message look like? Though it might sound like a daunting task, writing one can be quite simple. Say a user raises an issue the chatbot doesn’t cover: a good fallback acknowledges the miss and points the customer somewhere useful.
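As a sketch of the idea (with hypothetical Intent names and an assumed confidence cutoff), a fallback can be as simple as a guard around the intent router:

```python
# Hypothetical handlers for the Intents this chatbot does cover.
HANDLERS = {
    "order_status": lambda msg: "Let me check on that order for you.",
    "returns": lambda msg: "I can help you start a return.",
}

FALLBACK = (
    "Sorry, I'm not sure how to help with that yet. I can answer questions "
    "about orders and returns, or type 'agent' to chat with a live agent."
)

CONFIDENCE_THRESHOLD = 0.6  # an assumed cutoff; tune it against real traffic

def respond(user_message, classify):
    """classify(msg) -> (intent, confidence); fall back when the model is unsure."""
    intent, confidence = classify(user_message)
    if intent not in HANDLERS or confidence < CONFIDENCE_THRESHOLD:
        # Every fallback hit is worth logging: it's an optimization lead.
        return FALLBACK
    return HANDLERS[intent](user_message)
```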
Such a message reminds the customer what the chatbot can do and offers a way to escalate to a live agent who can address their concerns. It steers the customer back onto the happy path (one where their support questions get addressed) and mitigates the frustration they would otherwise feel if there were no fallback.
There’s another upside: every time the fallback Intent fires, it points to an opportunity for optimization. Examining which incoming messages aren’t being handled shows what users expect from the chatbot and how you can rise to meet those expectations.
Knowing what users are frequently asking will point conversation designers to the content flows worth building, and a tool like Dashbot’s Not Handled Report will streamline the process of grouping together similar unhandled messages. The clusters with the most volume point to the Intents with the most value to customers.
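Dashbot’s Not Handled Report does this grouping for you; purely to illustrate the idea, here is a rough stand-in that greedily groups unhandled messages by surface similarity and sorts clusters by volume (a real tool would use semantic similarity, not string matching):

```python
from difflib import SequenceMatcher

def cluster_unhandled(messages, threshold=0.6):
    """Greedily group similar unhandled messages; biggest clusters first."""
    clusters = []  # each cluster is a list of messages
    for msg in messages:
        for cluster in clusters:
            similarity = SequenceMatcher(
                None, msg.lower(), cluster[0].lower()
            ).ratio()
            if similarity >= threshold:
                cluster.append(msg)
                break
        else:
            clusters.append([msg])
    # Largest clusters first: these are the Intents customers want most.
    return sorted(clusters, key=len, reverse=True)
```

Run against a fallback log, the top clusters are the content flows to build next.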
Additionally, the NLP model can’t be trusted to get it right every time. People have a lot of unique ways of asking for the same thing. Phrasing can be ambiguous, so it’s not unusual for a customer’s message to get mapped to the wrong Intent. It’s vital to catch those mistakes. Asking for one thing and receiving something completely different will not only lead customers to believe the chatbot doesn’t work, but they will also exit the experience with a poor impression of the company itself.
Without access to data, these mistakes would be nearly impossible to catch. Having analytics in place will ensure that someone can monitor chatbot performance and check that Intents are getting correctly mapped. With Phrase Clusters, that task becomes simple. Every message is grouped by semantic similarity and tagged with the mapped Intent. Since each group of messages should have the same Intent, the mishandled messages will stick out like a sore thumb. From there, it’s just a matter of exporting the Clusters to retrain the model.
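The "sore thumb" check can be sketched in a few lines: within a cluster of semantically similar messages, anything mapped to a minority Intent is a candidate misfire worth exporting for retraining. The messages and Intent names below are made up for illustration:

```python
from collections import Counter

def flag_mismapped(cluster):
    """cluster: list of (message, mapped_intent) pairs judged semantically similar.

    Returns the cluster's majority Intent and any pairs mapped elsewhere,
    which are likely NLP misses to review and retrain on.
    """
    counts = Counter(intent for _, intent in cluster)
    majority, _ = counts.most_common(1)[0]
    outliers = [(msg, intent) for msg, intent in cluster if intent != majority]
    return majority, outliers
```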
Meet customers’ rising expectations
When first launching a new chatbot, starting small will lead to scalable success later. If it can answer a few support questions well and quickly escalate to an agent when the issue is something it’s not equipped to handle, then customers will happily use the chatbot. We consider that a success.
But with success come rising expectations. Customers are going to want the chatbot to handle ever more complex support issues, and to handle them at the same level of quality as the Level 1 questions. That’s why, at Dashbot, launch day is day one. A chatbot needs to improve constantly to provide high-quality customer care. Optimization is not an optional step; it’s a mandatory part of providing top-quality support. Data analysis is vital to that end.
Fortunately, conversational interfaces lend themselves well to analytics. Each interaction is rich with data because users are telling the chatbot exactly what they want in their own words. It would be a waste not to use this data. With the right analytics platform to dive deeper into each conversation, the insight gained will help keep the experience satisfying even as users’ support questions become more complex.
Dashbot is an analytics platform for conversational interfaces that enables enterprises to increase satisfaction, engagement, and conversions through actionable insights and tools.
In addition to traditional analytics like engagement and retention, we provide chatbot specific metrics including NLP response effectiveness, sentiment analysis, conversational analytics, and the full chat session transcripts.
We also have tools to take action on the data, like our live-person takeover of chat sessions and push notifications for re-engagement.
We support Dialogflow, Alexa, Google Assistant, Facebook Messenger, Slack, Twitter, Kik, SMS, web chat, and any other conversational interface.