AI Ethics: One Year Later

The rise of artificial intelligence (AI) in the customer experience space has raised, and continues to raise, new ethical questions for a wide set of stakeholders, including individuals, employees and employers. So last November, we introduced our first set of AI Ethics Guidelines. These were created to provide guidance both inside Genesys and among our customers and community. We also wanted to study this topic as a cross-functional group and share our findings. As part of this work, we collaborated on blog posts covering a broad spectrum of subjects you'll encounter as you incorporate AI into your business.

We also commissioned global research to delve into employee and employer attitudes toward AI. We found broad acceptance of AI coming into the workplace: 70% of US employees polled hold a positive view of it.

What We Learned

Throughout a series of discussions at customer and internal events this year, we learned that this is a topic of huge interest to many people. But they didn't want a debate. Instead, they sought information and a chance to discuss AI.

Recurring topics of interest included data management, how to avoid bias, and where personalization crosses the line from helpful to intrusive. We also expected our customers to be further along in their ethics journeys than they actually are. Most companies are investing heavily in AI, but the effects of the AI revolution have yet to be fully realized; for now, many of our customers are still in fact-finding mode.

Our engineers are the group most motivated to figure out how to execute AI in an ethical way. Internal debate has kept this a running topic and, more importantly, all of our engineers now complete mandatory training in the principles of AI ethics. They asked for this, and it's now built into our training programs.

What Happens Next

The internal training is just the beginning of how our principles will shape the product development process. We’re also looking to add AI ethics questions in all our Product Requirement Documents (PRDs). These questions will act as a forcing function to make ethics part of the everyday process. For example, we’ll ask questions like: “Does the use of AI add substantial benefit to my customers?” or “Can the customer know what we know about them?” or even, more pointedly, “Can AI engines explain decisions or show reasoning — and would the agent be comfortable communicating that reasoning to a customer?”

The people most often on the front lines of consumer questions or concerns about the use of AI are the agents spread across our customer base. To help them understand AI ethics, we're building courses about AI and its impact that can be delivered directly to agents.

We also continue to monitor and prepare for new regulations that touch on AI ethics, such as GDPR and the forthcoming California Consumer Privacy Act. Different governments are tackling AI ethics in different ways: the Government of Dubai has published its own guidelines, and the Australian Government has commissioned a consultation process via the Commonwealth Scientific and Industrial Research Organisation (CSIRO), the organization whose research underpinned modern WiFi in the 1990s.

While the full capabilities of AI have not yet been realized, we need to proceed thoughtfully. And we'll continue to do that. Genesys is passionate about making AI a true benefit for society. That means we need to be excited about its potential but also respectful of its impact. We expect this conversation to continue for the foreseeable future as the tech industry and the public alike discover this brave new world.

Stay up to date with our AI Ethics blog series and join the discussion online.
