AI Ethics: The Conduit for Safer AI

You might be familiar with the phrase, “Data is the new oil.” You might even be familiar with the updated slogan, “If data is the new oil, then AI is the new electricity.” Just as electricity brought about myriad changes to society, artificial intelligence (AI) could be the next great innovation of our time. But many still wonder how we, as a society, can ensure that this electricity is used responsibly. This is where AI ethics comes into play.

Media mentions of AI and ethics have doubled in recent years, with over 90% of those mentions expressing positive or neutral sentiment. AI ethics tackles the conversation around data use regarding privacy, transparency, bias and discrimination, as well as a lack of governance and accountability.

This is why Genesys developed its five AI ethics pillars in 2018. And it’s why we continue to push the envelope to better understand this challenging issue, one on which opinions about the right path forward still differ. Many experts agree that there’s a need for governing bodies to protect users’ data and ensure it is gathered and used ethically, in both the public and private sectors. As with other transformative technologies, such as genetic engineering and nuclear technology, regulation is key to ensuring safety and stability.

As more artificial intelligence use cases are developed globally, the need for data will only increase, and with it the desire to gather that data. But the risk that comes with assembling this information lies in the potential for disruption and discrimination. There needs to be a way to understand how a model evolves over time, rather than treating it as a black box operating without human oversight. It’s why multiple countries and companies have begun to prepare for the future and minimize disruption from early AI advancements as well as the coming artificial intelligence explosion.

Earlier this year, for example, the Pentagon created broad principles outlining the ethical use of artificial intelligence by the military. The White House proposed regulatory principles for the use of AI in the private sector. Governments are becoming more aware that, for AI to be a successful part of our daily lives, there needs to be some form of regulation, not a laissez-faire approach.

But government entities aren’t the only ones with a responsibility to regulate and govern AI. The private sector understands the need for AI ethics, as shown by the creation of the Partnership on AI, a group of nonprofit and for-profit organizations dedicated to serving as a “uniting force for good in the AI ecosystem.”

Genesys is contributing to the conversation as well, with our AI ethics pillars and AI Ethics Committee. We also require AI ethics training for all of our engineers. And we’re taking actionable steps by adding AI ethics questions to our Product Requirements Documents, putting AI ethics at the forefront of product design. This means we can ensure the ethical use of data and, ultimately, further build our relationships of trust and loyalty with customers. We’re also creating courses for agents who use the Genesys Cloud™ solution, empowering them to fully understand the impact of artificial intelligence on their jobs and daily tasks.

Most industries will feel the effects of this technology in some way. Genesys understands the responsibility and importance of minimizing disruption and discrimination in artificial intelligence without diminishing the benefits that its capabilities create. With multiple governing bodies and companies joining the ethics conversation, ideally this new age “electricity” will be safe to use and lead us into the next era.

To stay up to date, follow the Genesys AI ethics blog series and join the conversation.