The Good, the Bad, the Ugly and the Truth About AI

In director Sergio Leone’s classic film “The Good, The Bad and The Ugly,” it was clear who filled each role. Clint Eastwood starred as “the Good,” Lee Van Cleef was born to play “the Bad,” and Eli Wallach was the cunning, “ugly” bandit.

Today, the lines that separate good, bad and ugly have blurred; it’s not easy to discern what’s true and what’s not. And for those of us in technology, the demand to sort that out is constant and critical. Solving this problem is one of the drivers behind an important new feature in Genesys Dialog Engine.

Looking to AI for Answers

Let’s break down the source of our confusion over artificial intelligence (AI) and see what’s missing.

  • The Good: In recent years, there’s been a huge and growing interest in AI. It’s new and exciting, with countless videos and articles about how the technology will “change the world.” Vendor demos show AI working flawlessly. When I tell people I work in AI, I’m often asked, “Do you think robots are going to take over the world?” People have high expectations of what AI can do.
  • The Bad: The media have hyped up AI so much that its portrayal doesn’t match reality. All this hype has changed people’s expectations, particularly when it comes to customer experience. And when it comes to customer support, expectations for resolution have changed dramatically. According to Accenture, 87% of organizations agree that traditional experiences no longer satisfy customers. Customers expect problem resolution to be quick, easy, available 24/7, and accessible from multiple devices and channels. They expect far more than they used to, and keeping up is a huge ask for companies.
  • The Ugly: Many analysts and other experts claim that bots or digital assistants are the answer to the new high expectations in customer experience. By 2022, Gartner predicts that 70% of all customer interactions will involve machine learning, chatbots, and mobile messaging. But most companies don’t have speech scientists on staff or the typical AI specialists who can help achieve this new vision. For example, have you ever tried to create a bot? It’s not that hard, actually. However, a common problem is that when creating a bot, you have no idea how well it’s going to perform for your customers. You could spend a lot of time doing manual testing or you could try it out on a small number of customers and search through reports or logs looking for the answer. But wait, what if you simply use AI to make your life easier? Or is that just more hype?

The Truth: Dialog Engine and Accurate Intents

While we’re a long way from the average Joe being able to build another Amazon Alexa in one afternoon, we’re taking steps in the right direction to make AI easier and faster. And yes, you can use AI to make your life easier and figure out how well your bot will perform. It’s part of what we’ve done at Genesys.

The “Intent Accuracy Report” is a new feature in Genesys Dialog Engine. For those of you with a data or speech science background, you’ll recognize this report as a confusion matrix: a table layout that shows where a classification algorithm gets things right and where it mixes categories up.

This new report gives you insight into how your bot might perform in the real world and highlights areas for improvement. Based on the training utterances within your bot, the report will highlight intents that the bot could confuse with other ones when interacting with a customer. It’s a visually intuitive report that pinpoints areas for improvement before the bot gets to your customers.
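If you’ve never worked with a confusion matrix, here’s a minimal sketch in Python using scikit-learn. This is not the Dialog Engine implementation; the intent names, labels and predictions are invented purely to show how the off-diagonal counts expose intents that get mixed up with each other.

```python
# A minimal, illustrative confusion matrix for intent classification.
# NOT the Dialog Engine implementation; all names below are made up.
from sklearn.metrics import confusion_matrix

intents = ["check_balance", "transfer_funds", "reset_password"]

# The intent each held-out utterance actually belongs to...
y_true = ["check_balance", "check_balance", "transfer_funds",
          "transfer_funds", "reset_password", "reset_password"]
# ...and the intent the bot predicted for it.
y_pred = ["check_balance", "transfer_funds", "transfer_funds",
          "check_balance", "reset_password", "reset_password"]

cm = confusion_matrix(y_true, y_pred, labels=intents)

# Rows are true intents, columns are predicted intents. Counts off the
# diagonal show which intents the bot confuses with one another.
for intent, row in zip(intents, cm):
    print(f"{intent:15s} {row}")
```

In this toy example, “check_balance” and “transfer_funds” land in each other’s columns, which is exactly the kind of overlap the Intent Accuracy Report is designed to surface before your customers ever see it.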

And Nothing But the Truth with Dialog Engine

With any bot, good training utterances are imperative. If the training utterances don’t closely reflect what an actual customer will say to the bot, even the best reports are tainted. But we’ve got you covered with the top tips in our documentation.

Genesys uses AI to give you a predicted performance for your live customers so that you have confidence in your bot before going live. Once your bot goes live, however, your next question might be, “How well are my training utterances performing for my live customers? Can I use AI to make my life easier?” This is a bit harder to answer.

You can report on how well the bot is performing overall, but not so much on the training utterances. A common problem in AI is that you need labeled data to know how well something performed, and that data is usually hard to get. The idea itself is simple; it just sounds more complicated when AI folks start talking about labeled data.

Here’s how I look at it: I know my times tables, so I can mark my nephew’s math homework. In this case, the times tables are the labeled data; they’re basically the answer key. It’s the same with a bot: To know whether the bot got a customer utterance right or wrong, you need to know what the right answer is.
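To make the analogy concrete, here’s a tiny hypothetical sketch. The utterances, intents and the stand-in bot_predict function are all invented; the point is simply that accuracy can only be computed when each utterance comes paired with a known correct intent, the “answer key.”

```python
# A tiny, hypothetical illustration of why labeled data matters: accuracy is
# just the share of utterances whose predicted intent matches a known answer.
# The utterances, intents and stand-in classifier below are all made up.

labeled_utterances = [
    ("what's my account balance", "check_balance"),
    ("move 50 dollars to savings", "transfer_funds"),
    ("I forgot my password", "reset_password"),
    ("how much money do I have", "check_balance"),
]

def bot_predict(utterance: str) -> str:
    """Stand-in for a live bot's intent classifier (purely illustrative)."""
    if "balance" in utterance or "money" in utterance:
        return "check_balance"
    if "password" in utterance:
        return "reset_password"
    return "transfer_funds"

# Without the second element of each pair (the "answer"), there is nothing
# to compare the prediction against, and accuracy cannot be computed.
correct = sum(bot_predict(text) == answer for text, answer in labeled_utterances)
print(f"Accuracy: {correct / len(labeled_utterances):.0%}")
```

For live traffic you have plenty of utterances but no answer column, which is exactly why measuring a live bot’s accuracy is harder than it sounds.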

The Near Future of Genesys AI

To tell whether the bot understood a customer utterance correctly, and therefore report on its performance, “something” needs to know the correct answers for the thousands of live customer utterances. So, being able to say “The live bot was 75% accurate” isn’t actually an easy thing to do. But don’t worry; we have a team of people working on that “something” to extend Dialog Engine capabilities.

One day soon, we’ll be able to say, “Yes, we’ve found another way for AI to make your life easier!” At that point, you’ll have a lot more “good” and a lot less of the bad and ugly.

Learn about all the latest Genesys innovations available to you in this on-demand webinar.
