Mapping Your Organization’s Internal Stakeholders For AI Ethics Success

With all the developments around artificial intelligence (AI) technologies, and some of the scaremongering headlines about their effect on individual consumers, it might be easy to overlook the issues that emerge inside your own organization. Don't let the headlines overshadow the unique needs and priorities of different stakeholders as they consider AI ethics.

Some stakeholders are external, yet they work closely with internal partners and other groups (including employees) and the different subgroups within them. The needs and priorities of engineering or product development teams likely differ from those of departments like HR or legal. Let's take a look at what each group should consider for AI ethics success.

Employees 

Genesys recently conducted research into employee and employer attitudes toward AI. We used those surveys to learn more about attitudes toward trust and AI ethics guidelines. The surveys found that most organizations don't have any AI ethics guidelines at all, although 40% admitted that they should, and more than half of the employees polled said they'd like their employer to provide guidance. Our stance is that it's good to get started with a basic framework, as long as you take a “listen and learn” approach rather than assuming your 2019 guidelines will still be appropriate in the future.

Engineering and Product Development Teams 

One of the most interesting and affirming parts of our AI Ethics initiative at Genesys is the passion our engineers and product teams have for this topic. These teams are at the very heart of AI development, and the power that rests with them to program AI in an ethical way has spurred many energetic discussions. The Genesys product team has a strong voice in how our AI Ethics Guidelines are architected; these guidelines determine how they do the hands-on work. We recently introduced an AI Ethics training course, and it will soon form a compulsory part of developer onboarding. Notably, the engineers themselves drove the demand for this type of education.

Legal  

The advent and maturing of AI is a great opportunity for legal teams to consider it from both a legal and an ethical standpoint. Organizations should aim to define best practices and create industry-leading standards before governments, whose political agendas might conflict with the commercial imperatives of your business, impose standards of their own.

Many legal teams have recently tackled compliance with GDPR. And in the US, companies that do business in California are preparing for new privacy laws; California has a long history of setting standards that eventually become the norm across the US. As in many areas of technology, the law is struggling to keep pace with innovation. That means that AI might be judged by existing legal frameworks that don't perfectly map to the unique challenges that AI implementations present.

Human Resources 

HR teams are a key part of making AI ethics policies work across all functions. Just as importantly, they provide the feedback loop and temperature checks on how the organization feels about the subject. The HR team also acts as the gatekeeper for communicating news and updates to the whole organization. The better this team understands AI ethics and how different stakeholders are likely to react, the better it can guide the overall process. Different geographies will have different attitudes toward how the organization is performing; helping all parties understand each other will make the whole process go more smoothly.

Marketing, Sales and Customer Experience 

How organizations build confidence in their use of AI will also affect internal and external perceptions of the company's brand. Reputations are burnished when consumer and employee trust is built through the demonstration of ethical practices. Transparency about what you're doing, how you're doing it and the benefits you expect is key as the use of AI grows. Being coy or, worse, evasive will only build fear, and it could easily drive customers away. Truly individualizing a customer experience means treading a complex path between persuasion and coercion, advice and responsibility. For sales teams, internal education must build a confident position for AI within the product offering, along with an understanding of the impact of its adoption and the challenges it presents.

Unions 

For some businesses, unions and their onsite members are key constituents and influencers. Building early understanding and buy-in from this stakeholder group is essential to avoid hurdles that might occur later.

One Italian bank told us that, before it introduced AI into its contact center, it had an education plan in place to keep union reps on side. It's important to demonstrate the benefits of AI when it interacts with employees. Of all stakeholder groups, this one requires the most careful management. The assumption many people make, that AI adoption leads directly to job losses, must be tackled effectively by a strategic plan that recognizes the differing contributions of talent and technology at every customer touchpoint.

Over the last year, there's been an explosion of information available to help you track these subjects and learn more about how different organizations (and some governments) are tackling the evolution of AI.

AI ethics is a fascinating area that's continually evolving as we come to understand the potential and the limits of the technology. It's far from resolved; organizational expectations in every part of the business should reflect the pragmatic choices that AI demands. You should also set best-practice guidelines for evaluating the impact of an evolving AI landscape.

Stay up to date on our AI Ethics blog series and join the discussion on AI Ethics online.
