AI Ethics: Beware of AI Ethics Washing

The advent of artificial intelligence (AI) technologies brings with it ethical risks. Many companies have assembled dedicated teams and committees as a way to anticipate potential risks that the inappropriate use of AI technologies could create. The idea of regulating and managing AI to address ethics-related questions helps brands convince consumers, lawmakers and investors that they’re doing things right.

Approaches to building these ethics teams vary: some large tech companies, like Google, SAP and Genesys, have developed internal committees; some have partnered with other companies; and still others have teamed up with universities. And after ethics scandals like the Facebook/Cambridge Analytica issue affected the stock prices of certain large tech companies, those committees came to be viewed as tools to instill confidence in future business cases ahead of upcoming IPOs.

However, some brands have been accused of "AI ethics washing": engaging publicly in various ethics partnerships without acting on them or being held accountable. Several companies have failed to manage AI ethics for multiple reasons, whether issues with the AI models themselves or the appointment of committee members with ethical problems in their own pasts. Additionally, researchers working for institutes or partnerships sponsored by large tech companies have been advised to cut those ties so they don't fall into ethics traps of their own.

AI Ethics Washing and Accountability

At Genesys, we involve the entire product organization by addressing ethical questions early on — during the product development process — to make sure the products we develop are transparent, fair, provide social benefits to customer experience employees and protect the data of our customers. In addition, we work to provide training for engineers to sensitize them to the risks of using AI technologies. Finally, we make sure that our customer-facing teams are well-equipped to discuss those topics and collect feedback from the field on the risks that customers face.

Being accountable for developing ethical AI technologies in our products means that we need to continuously engage customers and employees through talks and debates. We must make sure that we anticipate as many risks as possible, especially in a fast-paced industry where new risks continue to arise.

But we're not solely internally focused. We support agents and customer experience leaders as they embrace this transition, acknowledging that our AI products can transform the workplace and will affect the daily lives of agents. Because AI technology affects the workforce, we not only provide training for agents and customer experience leaders on how to use and understand our products, we also work on solutions that ease the transition for agents in the workplace. This includes guidance on how to work alongside AI, and how to understand what AI does, so you can ensure your agents stay "on the same side of the table" as your end customers.

While no company can claim to have found the perfect system to manage AI ethics policies, staying close to the needs of our customers, understanding the work of agents and adjusting the product development cycles internally will be core to our guidelines — and values — at Genesys.

To get more information on our AI Ethics efforts, read all the blogs in the series and join in the AI Ethics discussion.
