Executive summary
Genesys is committed to building responsible AI that enhances customer experiences while upholding privacy, fairness, transparency and security. This article outlines our ethical framework for AI development and deployment, which is guided by four foundational principles: Accountability, Security and Privacy, Transparency, and Fairness. These pillars guide how we build, govern and evolve AI in ways that are responsible, trustworthy and aligned with customer values. Through robust governance, clear controls and ongoing improvement, we aim to ensure our AI systems remain compliant and beneficial for all users.
Our commitment to ethical AI
At Genesys, we are committed to developing artificial intelligence (AI) responsibly — placing security, privacy, fairness and transparency at the core of every capability we bring to market. We recognise that AI, particularly generative and large language model-based features, presents an evolving set of opportunities and risks for our customers. As such, we prioritise customer trust, choice and control in building ethical AI systems.
Genesys does not use customer communications — such as audio, video, chat, screen recordings, attachments or metadata — to train shared or third-party AI models. Our AI systems are designed to protect customer content and help ensure that data remains within the customer’s control.
We also provide granular controls at the organisation, group and user levels to allow administrators and users to configure which AI capabilities are enabled and how they are used. For example, AI features such as summarisation or copilots can be selectively activated and offer in-product notifications to inform users when AI is engaged.
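The layered organisation/group/user control model described above can be sketched as a simple precedence resolution, where the most specific explicit setting wins and the organisation default is the fallback. This is an illustrative sketch only; the setting names and function below are invented for the example and are not identifiers from the Genesys Cloud API.

```python
# Hypothetical sketch of org/group/user precedence for AI feature toggles.
# The most specific non-None setting wins; the organisation value is the default.

def resolve_ai_feature(feature: str, org: dict, group: dict, user: dict) -> bool:
    """Return whether an AI feature is enabled for a given user."""
    for scope in (user, group):      # check the most specific scopes first
        value = scope.get(feature)
        if value is not None:        # explicit override at this level
            return value
    return org.get(feature, False)   # organisation default; disabled if unset

org = {"summarisation": True, "copilot": True}
group = {"copilot": False}           # this group opts out of copilot
user = {}                            # no user-level overrides

print(resolve_ai_feature("summarisation", org, group, user))  # True
print(resolve_ai_feature("copilot", org, group, user))        # False
```

A real deployment would persist these settings and surface them in the admin console; the point of the sketch is only the override order, not the storage model.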
Through this commitment, Genesys ensures that ethical considerations are not an afterthought, but a part of the foundation for how we design, deploy and govern AI.
Introduction
Artificial intelligence is at the heart of the Genesys Cloud™ platform, powering capabilities that enhance customer experience, optimise agent performance and orchestrate outcomes across channels. As AI becomes more deeply embedded in how we operate and serve our customers, we recognise the critical responsibility to enable our customers to design and deploy it ethically.
Accountability
AI systems must be thoughtfully governed and actively monitored to enable them to work as intended — and as expected. At Genesys, accountability is more than a compliance requirement; it’s a core value that shapes how we build, deploy and continuously improve AI across our platform. We take responsibility for both the technical performance and the real-world outcomes of our AI systems.
Defined roles and governance structures
Cross-functional teams, including Product Management, Engineering, Legal and Privacy, work together to establish accountability for the ethical use of AI. Roles and responsibilities are assigned for each step of the AI lifecycle — from data sourcing to post-launch monitoring.
Pre- and post-deployment reviews
New AI capabilities undergo risk assessments that evaluate potential ethical, legal and operational concerns. These assessments continue after release, with models and their impacts reviewed through monitoring, customer feedback and performance testing.
Human oversight and control
We embed human-in-the-loop mechanisms in systems like Agent Assist and Agent Copilot. These aim to ensure that AI recommendations support human agents rather than override them, particularly in high-impact scenarios like customer care or compliance-sensitive use cases.
Escalation and feedback channels
Genesys customers and users have clear paths to report concerns or unintended AI behaviour. Feedback loops are in place to enable responsible teams to evaluate and respond to emerging risks or misuse patterns, helping to maintain accountability over time.
Security and privacy
Security and privacy are essential to ethical AI. Genesys follows a privacy-by-design approach, embedding protective measures throughout the AI lifecycle. We prioritise minimising data exposure, ensuring control remains in the customer’s hands and aligning with evolving global data protection regulations.
Data minimisation and isolation
Genesys Cloud AI systems are designed to collect and use only the data necessary for the feature at hand. All customer instances are logically separated in our cloud platform, and data access pathways are isolated between customers.
Data protection by design
Genesys does not use customer communications, such as audio, chat, screen recordings or attachments, to train shared or third-party foundation models. Any model customisation, such as prompt-tuning or fine-tuning, is limited to the customer’s environment and remains under the customer’s control.
Anonymisation and access controls
Where data must be processed for AI capabilities, we apply anonymisation or masking. Model pipelines are protected by strict access controls and audit logs, helping to ensure only authorised users can access sensitive infrastructure or model artifacts.
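As one illustration of the masking idea mentioned above, a minimal redaction pass might replace common PII patterns with placeholder tokens before text reaches a model pipeline. The patterns, token names and function below are illustrative assumptions, not the Genesys implementation; production redaction would cover many more entity types.

```python
import re

# Illustrative PII masking: replace email addresses and long digit runs
# (card- or phone-like numbers) with placeholder tokens before model input.
PATTERNS = [
    (re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"), "[EMAIL]"),
    (re.compile(r"\b\d{7,}\b"), "[NUMBER]"),
]

def mask_pii(text: str) -> str:
    """Apply each masking pattern in turn and return the redacted text."""
    for pattern, token in PATTERNS:
        text = pattern.sub(token, text)
    return text

print(mask_pii("Contact jane.doe@example.com or call 5551234567."))
# Contact [EMAIL] or call [NUMBER].
```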
Regulatory compliance and global standards
We build AI features to comply with international data protection regulations, including the EU AI Act, the General Data Protection Regulation (GDPR), the Health Insurance Portability and Accountability Act (HIPAA) and the California Consumer Privacy Act (CCPA).
Transparency
Transparency allows users and administrators to better understand how AI works, what it can do and where its boundaries lie. We aim to demystify the technology, giving customers both insight and control over their AI-powered experiences.
Product cards and system disclosures
For each AI-powered product, we make AI Product cards available to customers. These cards provide technical information on every AI system included in the product, including its purpose, the AI models used, training data scope, supported languages, performance benchmarks and known limitations. They help customers evaluate whether an AI feature is suitable for their environment.
Explainability in user interfaces
AI-generated content, such as sentiment scores, predictive routing decisions and predictive engagement signals, is accompanied by explanations and confidence scores where applicable. This enables agents and supervisors to make informed decisions based on AI recommendations rather than blind trust.
Configurable behaviour and thresholds
Customers have access to configuration settings that shape AI behaviour. This includes options to set thresholds, define fallback logic, adjust prompt phrasing or toggle specific features to suit operational goals.
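A confidence threshold with human fallback, of the kind described above, might look like this minimal sketch. The threshold value, function name and return strings are assumptions for illustration, not a documented Genesys configuration surface.

```python
# Hypothetical sketch: act on an AI suggestion only when model confidence
# clears a configurable threshold; otherwise fall back to a human agent.

CONFIDENCE_THRESHOLD = 0.8  # configurable per deployment

def route_suggestion(suggestion: str, confidence: float,
                     threshold: float = CONFIDENCE_THRESHOLD) -> str:
    """Return the action taken for an AI suggestion at a given confidence."""
    if confidence >= threshold:
        return f"auto: {suggestion}"
    return "fallback: escalate to human agent"

print(route_suggestion("Offer refund", 0.92))  # auto: Offer refund
print(route_suggestion("Offer refund", 0.55))  # fallback: escalate to human agent
```

Raising the threshold makes the system more conservative (more human escalations); lowering it automates more, which is exactly the operational trade-off such settings expose.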
Transparent documentation and communication
Each AI feature is supported by documentation that outlines how it works, what data it uses and how it handles privacy. Customers can make informed choices about deploying AI based on their understanding of its function and implications.
Fairness
AI should work well for everyone, regardless of language, geography or demographic background. At Genesys, fairness means taking measurable steps to detect, prevent and mitigate bias in data, models and system outcomes.
Bias-aware data practices
Training data is reviewed for representational equity across languages, regions and demographics. Annotation practices are designed to minimise subjectivity and avoid injecting human bias into model training.
Consistent model performance across populations
We benchmark model performance across languages and dialects to ensure equity in AI quality. For all new capabilities, Tier 1 languages include: English (US), English (GB), English (Australia), Dutch (Netherlands), French (France), French (Canada), German (Germany), Hindi (India), Japanese (Japan), Portuguese (Brazil), Spanish (Spain), Spanish (US/LATAM), Arabic (UAE), German (Switzerland) and Korean (Korea). Performance gaps are addressed through retraining, error analysis and targeted improvements.
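Benchmarking for parity across languages can be sketched as comparing each language's score against the best-performing language and flagging gaps above a tolerance. The scores, language codes and gap threshold below are invented for illustration; real benchmarks would use task-specific metrics per capability.

```python
# Illustrative parity check: flag languages whose benchmark score trails
# the best-scoring language by more than an allowed gap.

def find_performance_gaps(scores: dict, max_gap: float = 0.05) -> list:
    """Return language codes whose score lags the best by more than max_gap."""
    best = max(scores.values())
    return sorted(lang for lang, s in scores.items() if best - s > max_gap)

scores = {"en-US": 0.94, "fr-FR": 0.92, "hi-IN": 0.86, "ja-JP": 0.91}
print(find_performance_gaps(scores))  # ['hi-IN']
```

Languages flagged by such a check would then be candidates for the retraining and targeted improvement work described above.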
Avoiding reinforcement of historical bias
Genesys systems are built to avoid automation bias and feedback loops that may reinforce existing patterns. We enable human decision-makers to review, override or refine AI-driven actions when appropriate.
Inclusive language and region support
We prioritise the expansion of capabilities to underserved languages and markets. This includes expanding language coverage, developing culturally appropriate AI behaviours and testing with a global lens.
Governance and continuous improvement
Ethical AI is not a one-time achievement; it’s an ongoing process that must adapt to new technologies, regulations and customer needs. At Genesys, we view governance and continuous improvement as essential to keeping our AI systems trustworthy and aligned with our values over time.
Cross-functional AI oversight
AI governance at Genesys is managed through collaboration between Product Management, AI Research, Legal, Privacy, Security and Customer Experience teams. These teams establish the ethical review criteria for AI features, set risk thresholds and help to ensure practices evolve alongside customer feedback and industry standards.
Risk management across the lifecycle
We conduct risk and impact assessments throughout the AI lifecycle — from ideation to deployment and monitoring. These reviews consider factors such as potential bias, data exposure, unintended outcomes and user and environmental impacts. Models with higher potential risk are subject to additional oversight and testing.
Performance monitoring and model updates
After deployment, AI models are monitored for accuracy, fairness and unintended consequences. When performance degradation or bias is detected, updates are triggered using new data or revised tuning techniques. Continuous evaluation helps to ensure models stay aligned with both system goals and ethical principles.
Customer feedback integration
Feedback from customers plays a central role in improving AI systems. Genesys encourages customers to report concerns, suggest improvements and request configuration enhancements. These inputs are reviewed regularly and drive roadmap prioritisation for responsible AI evolution.
Future-ready practices
As new AI technologies emerge — especially in generative AI — we are actively updating our frameworks to include them under the same governance principles. From large language model (LLM) prompt safety to hallucination mitigation, we’re committed to proactively managing the next generation of AI capabilities.