We’ve all heard the saying, “Data is king.” It’s often tossed around casually in discussions of artificial intelligence (AI) and machine learning. But the concept is much easier said than implemented. We see it with businesses that gloss over the need for data, or that don’t pay enough attention to it before embarking on AI projects, which typically begin with bots.
This is the first in a series of blogs exploring the issues businesses need to consider around AI data. We’ll discuss how Genesys is addressing these issues and share lessons we’ve learned along the way.
Where the AI Love of Data Began
In the 1980s, speech recognition technology was the early trailblazer for how AI works with data. Although it initially recognized only numbers, US and European governments began a concerted effort to build on this technology. Their goal was to make data available so that research could focus on algorithms. Data curation, although tedious and time-consuming, fueled cutting-edge research that improved accuracy using statistical modelling techniques.
Today there’s an explosion of advancements in “computer vision,” which extracts information from images or multi-dimensional data. This, too, came about through an effort by academia and governments to provide data for research; MNIST and ImageNet are good examples of such datasets. However, computer vision is quite different from conversational AI. The variety of ways in which humans communicate verbally, the complexity of modalities such as voice, and conversation semantics that aren’t always well-defined all make conversational AI very challenging.
What’s Available Today Is Not Enough
Unfortunately, conversational AI suffers from a lack of research data, which hurts the industry in general and Genesys in particular. This scarcity makes it hard to compare the technology that powers voicebots, chatbots and agent-assist systems. For example, imagine benchmarking intent recognition built for a voice assistant like Alexa or Cortana against intents designed for a banking bot, if that’s your industry. It’s like comparing your living room lounge chair to a dentist’s chair. Tuning or calibrating our algorithms on tasks designed for personal assistant bots wouldn’t give us enough confidence to deploy the technology to our customers, who have more professional, task-oriented needs.
What data is available to the research community has weaknesses, including:
Genesys and Data Collection for AI
As Genesys developed the AI and machine learning technologies that are becoming part of our product portfolio, we had to address these data challenges, both for our customers’ bots and for our own. There’s a lot of knowledge worth sharing from the insights we captured along the way. These include defining an appropriate domain for the bot, capturing enough complexity to reflect typical use cases in the industry, expressing empathy, and defining KPIs for AI and user experiences.
This blog series highlights the various facets of this effort and lessons learned for those embarking on data collection efforts.
Project Goals and Expectations
With these challenges defined, we set out with several goals. First, we wanted a dataset that covers the functionality we expect of conversational AI systems in general, such as a reasonable number of intents, some intents with entities, bot confirmation and disambiguation.
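To make this concrete, here is a minimal sketch of what one labeled utterance in such a dataset might look like. The field names and values are illustrative assumptions for this blog, not Genesys’s actual schema; they simply show how intents, entities, confirmation and disambiguation can be captured per record.

```python
# Hypothetical record structure for one user utterance in a
# conversational AI dataset. All names are illustrative.
from dataclasses import dataclass, field
from typing import Optional

@dataclass
class UtteranceRecord:
    text: str                     # what the user said
    intent: str                   # labeled intent, e.g. "transfer_funds"
    entities: dict = field(default_factory=dict)   # slot values extracted from the text
    needs_confirmation: bool = False               # bot should confirm before acting
    ambiguous_with: Optional[list] = None          # other intents this utterance could match

# Example: a banking utterance with entities and a confirmation step.
record = UtteranceRecord(
    text="Move two hundred dollars to my checking account",
    intent="transfer_funds",
    entities={"amount": "200", "account": "checking"},
    needs_confirmation=True,
)
```

A structure like this lets the same dataset exercise intent classification, entity extraction and dialogue behaviors (confirmation, disambiguation) at once.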
This dataset also needs to measure the success of a bot from many perspectives, such as:
We also expect this to become a planning template for other internal data collection exercises, and a set of best practices for our customers. Finally, we expect this dataset to help customers immediately see the value in our AI technology. We believe it’s as valuable as a white paper detailing the KPIs customers care about.
As the algorithms evolve over time, this dataset can also be part of regression testing. With AI algorithms that learn from data, a regression test is a good gatekeeper. It can stop us from accidentally pushing changes to code that help one aspect but break functionality in other areas. This is critical in a DevOps style of software development.
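The gatekeeping idea can be sketched in a few lines. Everything below is an assumption for illustration: the utterances, the trivial stand-in classifier and the accuracy floor are made up, but the shape — a fixed labeled set plus an accuracy threshold that blocks a deploy — is the point.

```python
# Minimal sketch of a regression gate for an intent classifier.
# The examples, the classifier and the 0.95 floor are illustrative.

REGRESSION_SET = [
    ("what's my balance", "check_balance"),
    ("send $50 to savings", "transfer_funds"),
    ("I lost my card", "report_lost_card"),
]

def classify(text: str) -> str:
    """Stand-in for the real model; a trivial keyword matcher here."""
    if "balance" in text:
        return "check_balance"
    if "send" in text or "transfer" in text:
        return "transfer_funds"
    return "report_lost_card"

def regression_accuracy() -> float:
    """Fraction of the fixed regression set the classifier still gets right."""
    correct = sum(1 for text, gold in REGRESSION_SET if classify(text) == gold)
    return correct / len(REGRESSION_SET)

# Gate a deploy: fail the pipeline if accuracy drops below the chosen floor.
assert regression_accuracy() >= 0.95
```

In a CI/CD pipeline, a failing assertion like this blocks the release, so an algorithm change that helps one intent can’t silently break another.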
The Data We Want and How We’ll Capture It
We started this initiative by developing a banking bot using scenarios based on our knowledge of the banking domain and its unique requirements. We simulated real-life banking scenarios and collected our own conversational dataset in the banking domain.
Genesys employees used the bot to complete tasks with little to no guidance on how to interact; the focus was on what bots deal with in the real world. Because the banking industry has been an early adopter of AI-driven self-service, it made sense to use that domain for this work.
In Data We Trust
Implementing AI for your business is more than simply training a bot to perform a task. It’s an ongoing process in which your well-designed bot can continually learn and improve interactions with your customers.
Based on data captured and analyzed on how users interact with our bot, we’re learning how effective it is — and what we need to do next. This includes bot design, algorithmic improvements and developing features that are aligned with what our partners and customers need for them to be successful.
In subsequent blogs in this series, we’ll discuss details of the domain and aspects of the data we’re collecting. We’ll also look at our performance metrics and how we stack up against other companies and products, like Google Dialogflow.
Get started on your AI journey in one of our Build a Bot workshops.