Years ago, banks that deployed bots did so with caution. Bots were typically given a very narrow set of tasks, and most customer inquiries still went directly to client service agents. Bots answered basic questions or fulfilled simple intents.
Today, banks are more ambitious with bots — and the market supports that ambition. An enterprise-wide program that supercharges automation — using natural language understanding (NLU) and bots — will take years to fully realize. But the potential results are worth it for maximizing value across the enterprise.
Here are five areas of focus for bots that are unique to the financial services industry (FSI).
Unlike other verticals, the banking industry must contend with a wide variety and a high volume of possible intents because a single institution often spans multiple lines of business. Think of all the tasks in one financial institution: retail banking, checking accounts, card services, mortgages or wealth management.
Each of these lines of business has an entire library of intents. When you’re building bots, your intent library might be 10 times as large as that of a bot serving a retail chain. In retail, customers are usually looking for a product, making a purchase or returning an item. Even if the retailer sells 1,000 products, all of those products share the same process. There are fewer distinct intents.
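To make the scale difference concrete, here is a minimal sketch of how an intent library might be organized per line of business. All the intent and business-line names are illustrative assumptions, not taken from any specific vendor's library.

```python
# Hypothetical intent library, grouped by line of business.
# Every name here is illustrative, not from a real vendor catalog.
INTENT_LIBRARY = {
    "retail_banking": ["check_balance", "transfer_funds", "order_checks"],
    "card_services": ["dispute_charge", "report_lost_card", "raise_limit"],
    "mortgages": ["refinance", "payoff_quote", "escrow_inquiry"],
    "wealth_management": ["rebalance_portfolio", "schedule_advisor_call"],
}

def total_intents(library: dict) -> int:
    """Count distinct intents across all lines of business."""
    return sum(len(intents) for intents in library.values())

print(total_intents(INTENT_LIBRARY))
```

A retail bot's library would typically be one flat list; here, each line of business contributes its own list, which is why the totals grow so quickly.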
Financial services are more complex. The intents associated with your mortgage are different from life insurance, wealth management or card services. For example, you might want to dispute a charge on your credit card or refinance your mortgage. Each of those tasks has unique intents throughout the process.
This can be daunting, but there are ways to make this easier to manage. Many technology vendors that specialize in financial services have built up ready-to-use libraries of intents and utterances. Also, these same vendors might have pre-built integrations into back-end systems banks often use. But this doesn’t mean engaging with a vendor that specializes in financial service bots will solve all potential problems.
Larger banks often support clientele in many languages; and different geographic regions might have a banking-specific vernacular that’s not universal. Proper bot tuning, quality assurance and usability testing remain critical to success. When bots deliver a disjointed client experience, it’s often because the company has rushed this crucial training and quality assurance stage.
Identifying intents and filling slots is only half the battle. And sometimes, it’s the easier half. This is particularly true when you want bots to tie into back-end systems for intent fulfillment. Imagine your customer wants to transfer funds from a checking account to a savings account.
Identifying that the customer wants to move a specific amount of money — and all the details surrounding this transfer — satisfies the challenges of intent recognition and slot-filling. But that’s not where the bot’s task ends. It must actually fulfill that request. That means integration with FSI back-end systems is required to execute the intent. And a key challenge is ensuring each of those system interfaces adheres to security compliance and communication protocols.
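The funds-transfer example above can be sketched in a few lines: the bot keeps asking until every required slot is filled, and only then calls a back-end to execute the intent. The `transfer_backend` function here is a hypothetical stand-in; a real one would sit behind the bank's secured integration layer.

```python
from dataclasses import dataclass, field

@dataclass
class Intent:
    name: str
    required_slots: tuple
    slots: dict = field(default_factory=dict)

    def missing_slots(self):
        return [s for s in self.required_slots if s not in self.slots]

def fulfill(intent: Intent, backend) -> str:
    """Execute the intent only once every required slot is filled."""
    missing = intent.missing_slots()
    if missing:
        return f"Please provide: {', '.join(missing)}"
    return backend(**intent.slots)

# Hypothetical back-end call, standing in for a secured core-banking API.
def transfer_backend(amount, from_account, to_account):
    return f"Transferred {amount} from {from_account} to {to_account}"

transfer = Intent("transfer_funds", ("amount", "from_account", "to_account"))
transfer.slots.update({"amount": "200.00", "from_account": "checking"})
print(fulfill(transfer, transfer_backend))  # still missing a slot, so it prompts
transfer.slots["to_account"] = "savings"
print(fulfill(transfer, transfer_backend))  # all slots filled, so it executes
```

The sketch deliberately separates recognition (the `Intent` and its slots) from fulfillment (the `backend` call), because that boundary is exactly where the security and compliance work described above lives.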
Simplifying the process can address that. Some companies have consolidated interfaces with many back-end systems, essentially becoming a communications broker. This means your bot might not need to talk to 10 different systems.
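The broker idea can be sketched as a thin routing layer: the bot calls one interface, and the broker dispatches to whichever back-end system owns the intent. The system and intent names below are assumptions for illustration.

```python
# Hypothetical communications-broker sketch: one entry point for the bot,
# many back-end systems behind it. Handler names are illustrative.
class BackendBroker:
    def __init__(self):
        self._routes = {}

    def register(self, intent_name, handler):
        """Map an intent to the back-end handler that fulfills it."""
        self._routes[intent_name] = handler

    def execute(self, intent_name, **slots):
        handler = self._routes.get(intent_name)
        if handler is None:
            raise KeyError(f"No back-end registered for {intent_name}")
        return handler(**slots)

broker = BackendBroker()
broker.register("check_balance", lambda account: f"Balance lookup for {account}")
broker.register("dispute_charge", lambda charge_id: f"Dispute opened for {charge_id}")

print(broker.execute("check_balance", account="checking"))
```

The benefit for the bot team is that security review, protocol handling and credentials live in the broker layer once, rather than being re-implemented in every bot-to-system connection.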
Although many start their journey into bots with FAQ or concierge bots, transactional bots pose the real challenge because banks have higher security standards. And when it comes to moving these transactional bots into production, there’s no quick fix. There are, however, some best practices.
The first step is to divide all intents into those that require Identification and Verification (ID/V) and those that don’t. This will clarify the security profile you need to adhere to within your bot ecosystem.
The second step is to categorize the types of information that will fill slots or that the bots will deliver, determining what’s required from the perspective of PCI compliance and data-privacy regulations.
And third is to identify all the systems you’ll leverage for intent fulfillment — and categorize each from a security/risk perspective.
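The three steps above amount to building a security profile for every intent. Here is a minimal sketch of what that inventory might look like; the ID/V flags, data categories and system names are assumptions for the example, not a compliance standard.

```python
# Illustrative intent security profiles, following the three steps above:
# (1) does the intent require ID/V, (2) what class of data does it touch,
# (3) which back-end systems fulfill it. Values are assumptions.
INTENT_PROFILES = {
    "branch_hours":   {"idv_required": False, "data": "public",   "systems": []},
    "check_balance":  {"idv_required": True,  "data": "personal", "systems": ["core_banking"]},
    "dispute_charge": {"idv_required": True,  "data": "pci",      "systems": ["card_processor"]},
}

def security_review_queue(profiles):
    """Return intents that need security/compliance sign-off:
    anything requiring ID/V or touching regulated data."""
    return sorted(
        name for name, profile in profiles.items()
        if profile["idv_required"] or profile["data"] in ("personal", "pci")
    )

print(security_review_queue(INTENT_PROFILES))
```

An inventory like this is exactly the homework the next paragraph recommends bringing to your security and compliance teams.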
Gathering this information ahead of time prepares you for the necessary discussions with your security and compliance teams. You want these teams to be partners in this endeavor; they should also be the ultimate authority about what’s allowed. Do your homework and begin the consultation process with them before developing any ambitious plans.
Banks have long used complex models and data science in their business operations. Model risk governance grew out of this many years ago — when banks first started to use artificial intelligence (AI) algorithms for risk assessment, such as whether the bank should give you a loan. If the algorithms didn’t work properly, there could be unintended consequences that put banks in jeopardy. Now, model risk governance is a formal process with strict gate control. When an AI algorithm is being proposed — any type of AI tool or machine-learning algorithm — banks must go through a very detailed and lengthy process to protect themselves.
For example, if you have a project you want to complete in six months, talk to your in-house model risk governance team as early as possible. And ask them how long their process takes. You should build this into your project timeline.
Model risk governance wasn’t originally intended to be about bots. But bots and all services that use AI models have been caught up in it. If a process or system is making calculations, forecasting or analyzing, it could be classified as a model.
In the end, you’re responsible for navigating this process — and having all required information available for the model risk governance team. This information can include tuning, testing and quality assurance processes; data source information; and data cleansing processes; as well as detailed processes on how to eliminate the risk of unfair or unethical bias within the model.
Considering all the challenges, you might wonder where to begin and how to move toward a sophisticated end state.
It’s best to start with business units that offer the most opportunity for uplift. For example, many large banks have an insurance division that often has less advanced technology. Adding a bot that simply shares basic information or routes interactions more effectively could be valuable in streamlining the customer experience and freeing agents from mundane tasks. This type of simple FAQ bot isn’t transactional; it helps customers navigate through the frequently asked questions in a much more sophisticated way. In addition, the bot can maintain valuable context to use downstream when escalation to a human agent is required.
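The context handoff mentioned above can be sketched simply: the FAQ bot accumulates conversation context and packages it for the human agent on escalation, so the customer never has to repeat themselves. The session structure and field names are hypothetical.

```python
# Hypothetical FAQ-bot session: collects context during the conversation,
# then hands it downstream when escalating to a human agent.
class FaqSession:
    def __init__(self, customer_id):
        self.context = {"customer_id": customer_id, "topics": []}

    def answer(self, topic):
        """Record the topic so it survives into any later escalation."""
        self.context["topics"].append(topic)
        return f"Here is what we have on {topic}."

    def escalate(self):
        """Package session context for the downstream agent."""
        return {
            "customer_id": self.context["customer_id"],
            "topics_discussed": list(self.context["topics"]),
            "escalated": True,
        }

session = FaqSession("cust-001")
session.answer("card fees")
session.answer("branch hours")
print(session.escalate())
```

Even a non-transactional bot delivers value here: the agent receives who the customer is and what they already asked about, rather than starting the conversation from zero.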
In parallel, plan more ambitious, transactional bots in areas where risk is lower and success rates are higher. This positions you for success as you move into the domain of transactional bots. Once “Phase 2” is complete, you’ll have the experience necessary to better evaluate bot deployment for more challenging or higher-risk use cases.
The financial services industry is complicated; it’s heavily regulated and can be extremely conservative and risk-averse. However, the business benefits of automation using bots can be significant. Early adopters have proven this, and their success is prompting more FSIs to dive in as well. But real success depends on having a sound strategy that addresses industry-specific challenges.