It’s 8:00 AM and your flight was canceled overnight. You open the airline app, already bracing for friction. Instead, you’re greeted by a calm, articulate AI-powered virtual agent. It apologizes, explains the disruption and outlines your options. 

For a moment, that feels like progress. 

Except nothing actually changes. No ticket is issued. No refund is processed. You close the app knowing what caused the issue, yet still waiting for it to be resolved. The problem isn’t empathy or intent. The system understood you perfectly. The problem is capability. 

Many AI-powered interactions are helpful up until something actually needs to be done. 

This is exactly what Genesys Cloud™ Agentic Virtual Agent, powered by large action models (LAMs), was built to address. Our AI agents won’t wait for scripts, prompts or human handoffs. They will be able to recognize what the customer is trying to accomplish, choose the right next steps and keep the work moving across systems without stepping outside established guardrails. 

The truth is that most artificial intelligence (AI) agents in production today cannot carry work through to completion. That limitation lies in the technology underneath the experience, specifically the architectural difference between large language models (LLMs) and large action models: 

LLMs are designed to understand and explain. LAMs are designed to decide and do. 

The Execution Layer of Agentic AI 

Large language models transformed how people interact with machines by making conversations feel natural. They can understand messy questions, follow context and respond in ways that reflect tone and intent. Within the customer experience (CX), this replaced rigid scripts with dialogue that adapts in real time and feels far less mechanical. 

Trained on vast amounts of text, LLMs excel at predicting language sequences. Interpretation and expression are their defining strengths.  

They understand what someone is asking and can respond in a way that feels coherent and human. In customer experience, LLMs help make interactions more fluid and informed, but their contribution has limits. 

That isn’t a flaw. It reflects how LLMs were designed. They reach their ceiling when tasks require workflows that span enterprise systems, policies and time.  

This is where large action models take over. 

LAMs extend conversational intelligence into execution. They are built to reason over real-world operations within approved APIs, governed workflows and policy-enforced capabilities that already exist inside the organization. Each operation has defined inputs, permissions and expected outcomes — giving the model a grounded environment to plan within. 
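
To make that idea concrete, here’s a rough sketch in Python of what one entry in such a governed action catalog could look like. Every name in it (GovernedAction, rebook_flight, reservations:write) is a hypothetical illustration for this post, not the Genesys Cloud API.

```python
from dataclasses import dataclass

# Hypothetical sketch of one entry in a governed action catalog: the model
# plans only with operations whose inputs, permissions and expected outcomes
# are declared in advance. None of these names come from a real product API.
@dataclass(frozen=True)
class GovernedAction:
    name: str                              # operation identifier
    inputs: tuple[str, ...]                # parameters the action accepts
    required_permissions: tuple[str, ...]  # entitlements needed to call it
    expected_outcome: str                  # what a successful run produces

rebook = GovernedAction(
    name="rebook_flight",
    inputs=("traveler_id", "canceled_flight"),
    required_permissions=("reservations:write",),
    expected_outcome="confirmed reservation on an alternative flight",
)
print(rebook)
```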

That control is intentional. It’s what helps make autonomy safe. 

Large action models don’t talk about what could happen. They focus on what should happen next and then keep going until the job is complete. Each step is tracked, controlled and easy to follow.  
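
In rough pseudocode terms, that behavior is a plan-act loop rather than a single reply. The sketch below is purely illustrative: in a real LAM the next step comes from the model’s reasoning, not a hardcoded list, and the function names here are stand-ins.

```python
# Hypothetical sketch of the loop behind "keep going until the job is done":
# choose the next permitted step, execute it, record it, repeat.
def plan_next_step(remaining):
    """Stand-in planner: in a LAM, this decision comes from the model."""
    return remaining[0] if remaining else None

def run_to_completion(steps):
    audit_trail = []                              # each step is tracked and reviewable
    remaining = list(steps)
    while (step := plan_next_step(remaining)) is not None:
        audit_trail.append(f"executed: {step}")   # toy execution inside guardrails
        remaining.remove(step)
    return audit_trail

print(run_to_completion(["authenticate", "check availability", "rebook", "confirm"]))
```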

This distinction is what separates AI-led platforms from AI-assisted systems. Conversations no longer end with “someone will follow up.” They conclude with finished work and customer resolutions. 

Introducing the Industry’s First Autonomous Agentic Virtual Agents  

The experience you encountered in that airline app isn’t unusual. It’s the expected outcome when conversational systems are asked to perform operational jobs. The virtual agent communicated clearly, yet left the burden on you to wait, retry or escalate. 

Genesys Cloud Agentic Virtual Agent is designed to take ownership of customer experiences and drive them to completion. These agents will know what they’re allowed to do, how to proceed and when human judgment is needed. 

Now, let’s revisit the canceled flight. Instead of listing rebooking options, Genesys Cloud Agentic Virtual Agent can autonomously authenticate you as the traveler, check availability, assign a new seat, apply a credit where policy allows, update the reservation and confirm the result — all within the same interaction. 
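
As a rough sketch of what that flow could look like under the hood, the Python below walks the same steps as governed actions, with a runtime policy check gating the credit. Every function name, value and policy rule here is a hypothetical illustration, not Genesys Cloud’s actual interface.

```python
# Illustrative end-to-end resolution of the canceled-flight scenario.
# All names and rules are hypothetical; this is not a real product API.

POLICY = {"max_credit_usd": 200}   # policy enforced at runtime, not in a prompt

def authenticate(traveler_id):
    return {"traveler": traveler_id, "verified": True}

def check_availability(canceled_flight):
    return {"flight": "GA412", "departs": "10:35", "seats_open": True}

def assign_seat(flight):
    return {"flight": flight["flight"], "seat": "14C"}

def apply_credit(amount_usd):
    # Policy gate: the agent may only act inside approved limits.
    if amount_usd > POLICY["max_credit_usd"]:
        raise PermissionError("Credit exceeds policy; escalate to a human.")
    return {"credit_usd": amount_usd}

def update_reservation(traveler, seat, credit):
    return {"status": "confirmed", **traveler, **seat, **credit}

# The same interaction, carried through to completion step by step.
traveler = authenticate("T-1042")
flight = check_availability("GA118")
seat = assign_seat(flight)
credit = apply_credit(150)                # allowed: within policy
confirmation = update_reservation(traveler, seat, credit)
print(confirmation)                       # the customer leaves with a result
```

The design choice worth noticing is that the credit limit lives in the execution layer, not in the conversation: the agent physically cannot exceed it, and when a request falls outside policy, the only path forward is escalation to a human.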

This time, you leave with certainty, not a recap. 

This is possible because these agentic virtual agents are powered by the Scaled Cognition APT-1 LAM, trained to operate inside real enterprise environments. They will reason against actual tools and policies rather than hypothetical workflows.  

Every action taken can be traced and governed. Nothing is improvised or hidden behind a follow-up ticket.   

Native support for open standards like Agent-to-Agent (A2A) and the Model Context Protocol (MCP) will allow our AI agents to collaborate securely with one another, and across enterprise systems, without losing context or control.  
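
To give a flavor of what MCP-style interoperability looks like in practice, here is a minimal tool-server sketch. It assumes the official MCP Python SDK (the mcp package and its FastMCP helper, which may differ by version); the airline tool itself is a hypothetical stub, not a Genesys component.

```python
# Minimal MCP server sketch: expose one action as a tool that other
# MCP-aware agents can discover and call. Assumes the `mcp` Python SDK.
from mcp.server.fastmcp import FastMCP

mcp = FastMCP("airline-actions")

@mcp.tool()
def rebook_flight(traveler_id: str, canceled_flight: str) -> str:
    """Rebook a traveler onto the next available flight (illustrative stub)."""
    return f"Traveler {traveler_id} rebooked off {canceled_flight}: confirmed."

if __name__ == "__main__":
    mcp.run()  # serves the tool over stdio so other agents can call it
```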

At Genesys, execution and governance are inseparable by design. Our agentic virtual agents operate within defined boundaries to enforce policy at runtime and escalate only when human expertise is required.  

This matters because customers don’t care how many teams or platforms sit behind an interaction. What they want is their issue resolved. Genesys Cloud Agentic Virtual Agent is built to protect that expectation by delivering a single, uninterrupted experience where progress is visible and outcomes are clear as work moves across multiple parts of the organization.  

Most importantly, the customer experience stops feeling like a series of apologies and explanations. It becomes anticipatory, with journeys shaped in advance instead of repaired after the fact.   

If your AI can explain the problem but can’t resolve it, it’s not done yet. With the industry’s first autonomous agentic virtual agents powered by large action models for enterprise CX, Genesys is taking customer experiences beyond conversation into trusted, outcome-driven action at enterprise scale. See it in action. Get a demo of Genesys Cloud Agentic Virtual Agent today. 

Genesys Cloud Agentic Virtual Agent powered by large action models is expected to be generally available globally in the first quarter of Genesys fiscal year 2027.