Many companies are excited about the possibilities of applying artificial intelligence (AI) to their businesses. They hope to gain new insights into how their customers behave and what they can do to increase revenue or develop new competitive offerings. However, using AI means consuming lots of data, and that data typically includes personally identifiable information (PII). PII carries the risk of a breach that could lead to fines, litigation, loss of reputation and loss of revenue. Most AI-based services reside in the cloud, which presents further challenges for companies looking to move their data there to take advantage of new technologies.
For contact centers, the prime data for AI training includes call transcripts, analysis derived from them, such as sentiment, and combinations of that data with other sources like order history, sales or interactions from other channels. While PII can be removed easily from self-contained fields, it’s harder to remove when it’s mixed with other information, as in a transcript. Fortunately, new technologies are available to help mitigate the risk of PII exposure.
From a consumer perspective (and we’re all consumers), a company breach of PII can have dire consequences. Information that gets into the wrong hands can lead to identity theft, embarrassment, property theft, compromised medical care, blackmail or even physical harm. It would be hard to trust a company that mishandled our information.
Tools and Culture to Safeguard PII
In implementing a technological solution to protect PII, it’s important to understand the potential sources of data breaches. The most infamous are large-scale data breaches and hacks. In contact centers, PII also lives in audio recordings and their associated transcripts, so anyone with access to these must be vetted carefully. A breach might also occur when an agent copies down sensitive information, or even from the customer’s end, if someone eavesdrops on the conversation.
While enterprise security tools and policies help prevent larger leaks, training and the development of a culture of ownership can lessen softer sources of leaks. Examples of this include when an agent writes down information or doesn’t press the pause button completely when a customer provides payment information. For the areas in between, the best policy is to not store PII at all.
To accomplish this, after creating a transcript of a recording, companies can use named-entity recognition (NER) to detect suspected PII. Unlike a normal text search, NER parses the text and identifies patterns similar to what it was trained to recognize. In more advanced implementations, NER understands when PII is split across multiple interactions, is spelled out, or is spoken in a way that differs from how it’s written. Once the PII is found, it can be redacted from transcripts. It also can be removed from the corresponding audio recordings.
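To make the detect-and-redact step concrete, here is a minimal sketch in Python. It uses simple regular expressions as a toy stand-in for a trained NER model (a real system would use a statistical model that also catches spelled-out or split-up PII); the pattern names and the `redact` function are illustrative assumptions, not part of any specific product.

```python
import re

# Toy patterns standing in for a trained NER model. Labels are
# illustrative; a real NER system is trained, not rule-based, and
# also handles spelled-out or fragmented PII.
PII_PATTERNS = {
    "CARD": re.compile(r"\b\d(?:[ -]?\d){12,15}\b"),          # 13-16 digit card numbers
    "EMAIL": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),      # email addresses
    "PHONE": re.compile(r"\b\d{3}[-.\s]\d{3}[-.\s]\d{4}\b"),  # US-style phone numbers
}

def redact(transcript: str) -> str:
    """Replace each suspected PII span with a [LABEL] placeholder."""
    for label, pattern in PII_PATTERNS.items():
        transcript = pattern.sub(f"[{label}]", transcript)
    return transcript

print(redact("Sure, my card is 4111 1111 1111 1111 and my email is jo@example.com."))
# → Sure, my card is [CARD] and my email is [EMAIL].
```

Because only the placeholder labels are stored, a later breach of the transcript store exposes `[CARD]` rather than an actual card number; the same detected character offsets could be used to locate and silence the matching spans in the audio recording.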
Once this is accomplished, the overall exposure an organization has to PII breaches is significantly reduced. Even if there is a data breach, the hackers might only be able to see redacted information. By using technologies such as NER, companies can benefit from having access to a treasure trove of customer interaction data without risking the privacy of their customers.
The all-in-one Genesys Cloud CX™ platform is a highly secure, agile and seamless customer experience solution that lets you add AI capabilities easily. To learn more, request a demo today.