October 4, 2023 – Duration 00:34:55
Even as consumers’ use of bots increases, their satisfaction with the experience has declined. In fact, consumers are less satisfied with chatbots than with any other service interaction channel, according to “The State of Customer Experience.” In this episode, Mitch Mason, Principal Product Manager, Genesys Conversational AI, provides easy ways to improve the bot experience, as well as practical tips on using bots to streamline customer journeys, improve satisfaction and increase issue resolution.
Want to listen more?
Subscribe to our free newsletter and get updates in your inbox on new episodes released each week!
Principal Product Manager, Genesys Conversational AI
Mitch is the Principal Product Manager responsible for Genesys Conversational AI. He uses his service and technology background to steer innovations from conception to value. He has hands-on experience in digital, voice, analytics, integrations and NLU. Prior to Genesys, Mitch was a product manager for IBM Watson Assistant.
Here are conversation highlights from this episode, edited and condensed. Go to the timestamps in the recording for the full comments.
Mitch Mason (01:37):
I’m a product manager on the conversational AI team. Specifically, I manage both the voice and digital bots, so we work closely with all the services in the AI experience. Anywhere you might see natural language, my team is most likely involved.
Mitch Mason (01:57):
Often, we see the 80/20 rule, where 80% of the people visiting any support site are asking the same couple of questions. Those are generally going to be the most valuable ones to automate. They’re going to be more standardized, so you can offload that from your agents.
If it’s something like an investment use case, for example, “Where should I put my money?” — this is obviously a critical thing someone needs in-depth analysis on versus using a bot. But a bot could easily answer basic questions about fees and could collect some information to improve the value of the bot.
Mitch Mason (03:59):
It’s a great way to start with bots. You can gather a lot of data, get used to the bot technology, figure out what you want to and should do with it, and identify problems to solve using natural language understanding, or NLU, and then pass that over to someone who can take action.
And as you start to see, “We’re sending 80% of our calls to this one endpoint in that triage, so let’s automate that,” you can gain a lot of trust in the system. You can learn a lot along the way without exposing yourself to risk.
Obviously, if you don’t train the bot well, it might not perform well either, so there’s certainly some risk the first time you venture into this type of product.
Mitch Mason (05:30):
One of my favorite use cases is The National Domestic Abuse Hotline. It’s in a sensitive industry and something a lot of people are, for good reason, almost afraid to talk about, whether it’s for judgment or for their own safety. The Hotline is a nonprofit organization, so they have limited resources, even though they’re on a noble mission.
A lot of times, someone will reach out who may be in danger and has to leave immediately without ever getting to an agent. If they can stay briefly on hold, they can talk to a bot, which will do the basic triage and address any immediate needs 24/7. So, it’s a great way to get someone to be heard.
And then one feature that we helped release is a no-input timeout: If a person doesn’t send a message to the bot after a certain amount of time, we’ll ping them and ask, “Are you still here?” or “Are you ready to be transferred?” This is helpful because, if someone messages the bot and then has to leave either for their safety or for any other reason, and if an agent picks up that chat, they’re actually wasting valuable time that could be spent helping another person when no one’s on the other end.
It’s helping both sides of the picture in that the person knows that someone is on their way to help them, and the agent isn’t spending valuable time picking up a chat when no one’s there.
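The no-input timeout described above can be sketched in a few lines. This is an illustrative outline, not the Genesys implementation; the threshold and prompt wording are assumptions chosen for the example.

```python
from typing import Optional

# Hypothetical idle threshold; a real deployment would make this configurable.
NO_INPUT_TIMEOUT_SECONDS = 120.0


def check_no_input(last_message_time: float, now: float,
                   timeout: float = NO_INPUT_TIMEOUT_SECONDS) -> Optional[str]:
    """Return a re-engagement prompt if the user has gone quiet, else None.

    Both arguments are timestamps in seconds (e.g., from time.time()).
    """
    if now - last_message_time >= timeout:
        # Ping the user before routing the chat to an agent, so an agent
        # never picks up a conversation with no one on the other end.
        return "Are you still here? Are you ready to be transferred?"
    return None
```

The key design point is that the check runs before agent handoff: a silent chat is re-confirmed or closed rather than consuming an agent’s time.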
Mitch Mason (07:31):
Let’s dive into an end-to-end story of how someone might interact with a bot. Say Hannah is a person who just bought a coffee maker, then visits the company’s website and scrolls around. We know a little bit about who she is; she might be logged in. So, we use predictive engagement to reach out. There are two ways to do this: either through segments or outcomes. Either way, the goal is to notify someone that a chatbot is available to help. If you were to offer this to everyone, it’s annoying. If you were to offer this to nobody, you’re missing opportunities.
Again, predictive engagement can either be set up through segments or outcomes. Segments are a rules-based method to say, “If a person does these things or fits this profile, let them know the bot is there.” In the Hannah story, we see in her profile that she’s eligible for a recall, so the bot can proactively message her and say, “I see you have a recall. Let’s talk about that.”
Outcomes are a more AI-based approach where you can define a specific outcome: “I want to message people who are most likely to purchase a new coffee machine,” for example. Maybe they’ve visited certain pages, they’ve done certain actions or fit a profile. AI learns over time that these people are most likely to purchase.
With this approach, the AI starts out just observing the different people who buy a coffee machine. And then it will learn over time these specific features that will prompt it to say, “I can help you buy a coffee machine.”
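The two triggers described above can be sketched side by side. This is a simplified illustration of the concepts, not the predictive engagement product; the profile field, probability score and threshold are all hypothetical.

```python
def matches_segment(profile: dict) -> bool:
    """Rules-based segment: offer the bot if the visitor fits a profile.

    Hypothetical rule mirroring the Hannah example: the profile shows
    a recall-eligible product.
    """
    return profile.get("has_recall_eligible_product", False)


def likely_outcome(purchase_probability: float,
                   threshold: float = 0.7) -> bool:
    """Outcome-based trigger: a model has scored how likely this visitor
    is to purchase; offer the bot only above a confidence threshold."""
    return purchase_probability >= threshold


def should_offer_bot(profile: dict, purchase_probability: float) -> bool:
    # Offer to people who match a rule OR whom the model scores highly.
    # Offering to everyone is annoying; offering to no one misses
    # opportunities.
    return matches_segment(profile) or likely_outcome(purchase_probability)
```

Segments are deterministic and auditable; outcomes improve over time as the model observes which visitor behaviors precede a purchase.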
Going back to Hannah, she says, “If my machine is available for a recall, I need to check that out.” She clicks to chat, and the bot gives her preliminary information and maybe some choices like a mail-in rebate, speaking to an agent or receiving a knowledge article that walks her through the process. Hannah wants to talk to an agent, so this is where predictive routing comes in.
As with predictive engagement, when you first turn on predictive routing, it’s going to watch interactions between the agents and the customers and learn which agents are best at solving which problems. It will also learn that in some cases, even if that agent is not the next available, it might make sense to wait one or two minutes for this agent because they give a better experience and will handle it faster than someone who might still be learning the process.
Mitch Mason (11:55):
This cycle comes around every few years. When we first started doing intent-matching machine-learning NLU, people thought it would be hyper-conversational and understand a lot more questions. But you were still building a rigid script of what the bot should do, versus a free-flowing conversation. We’ve made a lot of gains there. We’ve made things more flexible, but it’s still not as natural as a human, especially with more complex questions or utterances.
Most recently, we’ve probably all seen the generative AI technology that’s coming out. On the surface, it all looks impressive and is doing some powerful things. But it’s not perfect either. There’s risk that it can hallucinate. It can leak data.
If you’re on your website and you’ve trained it on your data, there’s a bit less risk.
It comes back to what bots are good for. That’s what you want to focus on. The sweet spot is where you’re going to gain the most value and see the least risk.
You should expect it to still follow some type of script. You hand an agent a script and they can jump in and out. They can weave, they can change scripts. Some of those things are going to be possible with a bot, not all of them. This thing’s not going to be perfect. It’s not going to answer a hundred percent of your questions. Your agents don’t answer 100% of the questions accurately.
The expectations should be set, especially with testing as you’re building this, that maybe you have a standard test set of questions you want to keep the bot trained on. It’s never going to be at 100% accuracy, and if it is, you’ve probably overfit for those things in your training test.
The reality is customers are going to ask unexpected questions, so train the bot over time and keep learning as you go. You’re always going to be growing this just like you would a new employee.
Mitch Mason (15:14):
There are some things to consider. It’s going to depend on who you’re talking to. For example, back to the coffee scenario, Hannah doesn’t want in-depth information about coffeemaker options. She just wants to press a button, get a nice cup of coffee and get back to her day. Whereas your agents are domain experts, so they want all the extra information.
So, not only does that guide what type of content you should be training the bots on, but it might also guide what technology you should use. If you are deploying a bot onto your site, you can either use a knowledge portal, where it’s easy to deploy FAQ-like experiences or simply surface a knowledge article, or a more conversational bot that offers a very guided, handheld, comforting experience.
Also consider that some people might have a direct problem they want help with and others are there more to discover and learn about what products you offer. Maybe they want to compare those. So, having a bot that can give an answer to a question, as well as provide recommendations and links would be useful in that scenario.
And make sure you’re fine-tuning the content for customers, as well, not just for the bot.
Mitch Mason (18:49):
First, always use as real of data as possible. I’ve seen time and time again where someone says, “I want this bot to help with doing a recall of a coffee machine” to stick with the same example. And as an expert, I know how I would talk about that, but Hannah, the customer, will most likely not talk about it the same way the expert would. And the more complicated your industry gets, the bigger this gap becomes between the customer who might be asking the question and the expert who’s training the bot.
So, make sure you get real data about how customers are interacting and how they’re going to ask questions. And then, train your bot using that data because, otherwise, you’re going to fit your model for an expert. A new user’s going to ask a question in a way that doesn’t match anything you’ve trained the bot on and it’s not going to know what to do, even though it’s a basic question.
The second one is to understand the persona that you’re going to be interacting with. If they’re an expert user, you can use acronyms and jargon, but if they’re more of a novice, make it suitable for them.
The third best practice is going to be about how far the bot can go for a user. The more the bot can do for the user, the more satisfied they’re going to be with that experience. If you have a bot that simply says, “Click this link and go to this page and do it yourself,” the person may have already seen that page and struggled. They may have questions about it, they may want to have a conversation.
A better experience is to have a bot that’s able to do a data action: using a back-end integration to pull a user’s profile and update it based on what they wanted to do. One example is if someone were to ask the bot, “Where is my package; the tracking number is X?” The bot could refer the customer to another page to type in their tracking number, or better, do a data dip, look up that tracking number and give them the location of their package.
Being able to take action for a customer goes a long way, especially when you’re on other channels outside of the website. If they’re on the website already and you point to another page, that’s not so bad. But if they’re on SMS or social, not taking them out of the experience that they wanted to be in is a great way to build a much better experience.
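The tracking-number example above can be sketched as a small data action. This is a hedged illustration: `fetch_tracking_status` is a stand-in for whatever back-end or carrier integration you actually call, and the returned fields are invented for the example.

```python
def fetch_tracking_status(tracking_number: str) -> dict:
    """Stubbed back-end call (a 'data dip').

    A real data action would invoke a REST endpoint or carrier API here;
    this stub just returns a fixed, illustrative record.
    """
    return {
        "number": tracking_number,
        "status": "in transit",
        "location": "a regional sorting facility",
    }


def answer_where_is_my_package(tracking_number: str) -> str:
    """Answer in-channel instead of redirecting the user to another page."""
    info = fetch_tracking_status(tracking_number)
    return (f"Your package {info['number']} is {info['status']} "
            f"at {info['location']}.")
```

The design choice being demonstrated: the bot resolves the question inside the conversation, which matters most on channels like SMS or social where sending the customer to a web page breaks the experience.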
Mitch Mason (23:14):
If you’ve never tested a bot experience with anyone outside your direct peers — someone who would ask things in a different way — you run a high risk of building a bot that is so fine-tuned for the way you see it that someone else will come in and have a poor experience.
There are a number of ways you can test your bots. One is a regression test. You hold out some amount of data, typically 20% of your training utterances (the more you have, the better), and never train the bot on it. Then you ask the bot those held-out questions and measure how many it gets right; obviously, the higher the better.
You’ll go back and edit your training data: maybe add new utterances and take some away, combine intents or remove intents, whatever you need to do to improve things. Then you run that same exact blind test set, and ideally the results go up. If the results go down, you go back to the drawing board. You do this until you see diminishing returns, where the improvements you’re making are having less of an effect and adding less value. And eventually you’ll be satisfied.
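The hold-out regression test described above can be sketched as follows. This is a generic illustration, not a Genesys feature: `classify` stands in for your bot’s NLU, and the data is a list of (utterance, intent) pairs.

```python
import random


def split_holdout(utterances, test_fraction=0.2, seed=42):
    """Shuffle labeled (utterance, intent) pairs and reserve a blind test set.

    The test set is never used for training, so accuracy on it estimates
    how the bot handles questions it has not seen before.
    """
    rng = random.Random(seed)  # fixed seed so the same blind set is reused
    data = list(utterances)
    rng.shuffle(data)
    cut = int(len(data) * (1 - test_fraction))
    return data[:cut], data[cut:]  # (train, test)


def accuracy(classify, test_set):
    """Fraction of held-out utterances whose predicted intent is correct."""
    if not test_set:
        return 0.0
    correct = sum(1 for text, intent in test_set if classify(text) == intent)
    return correct / len(test_set)
```

Re-running `accuracy` on the same blind set after each round of training edits gives the trend the passage describes: keep iterating while the score climbs, and stop once the gains flatten out. A score of exactly 100% is a warning sign of overfitting rather than a goal.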
Mitch Mason (27:03):
We’re always trying to make it easier to build bots. Obviously, we see a lot of potential in some of the generative AI technologies out there…
One of our current research efforts is on generating a bot — the intents, the slots and the flows — automatically off a very small amount of data. And you still get the chance, and really the requirement, to edit and approve whatever has been created. Of course, someone’s going to want to fine-tune it for their industry, for their brand personality.
And then secondly is a focus on the end-user experience. One of our most recent releases is cards and carousels. When you build a bot on a digital channel, you can use cards and carousels as rich media to interact with the user. We want to make it so each channel you’re on has the highest fidelity, the most optimal experience you can give. That’s where our focus is going: investing in creating as rich of an experience as possible.