The Dual CX Shift: When Both Sides Have AI

We built an AI voice agent. We pointed it at a real contact centre — a major UK retailer's customer service line. The agent called, navigated the IVR, waited on hold, reached a human, passed security verification by providing the customer's name, order number, and delivery address. Then it explained the complaint: a missing item from a clothing order. The human agent processed the refund.

We did it three times. Three different calls, three different agents. None of them knew they were talking to a machine.

This isn't a demo. It happened. And it changes everything about how customer service needs to be designed.

The fifty-year assumption

Since the first call centres appeared in the 1970s, customer experience has operated on a single structural assumption: the company controls the technology, and the customer navigates it. The company decides the IVR menu. The company chooses the chatbot script. The company sets the hold music and the operating hours and the callback rules.

The customer's job is to show up, comply with the process, and hope for resolution.

That assumption is breaking.

Four modes, not two

Every customer-company interaction now sits in one of four modes:

Mode 1: Human ↔ Human. Traditional service. A person calls, a person answers. This is where most operational budget still sits, and it's shrinking as a share of total interactions every quarter.

Mode 2: Human ↔ Brand AI. The model most companies are investing in right now. Chatbots, IVR, self-service portals. The customer is still human; the company has automated its side. The risk: most of this AI is designed for containment, not resolution.

Mode 3: Customer AI ↔ Human. The customer's AI contacts the company's human agent. This is the test we ran. The customer's assistant calls in, navigates the process, passes verification, explains the issue, and gets it resolved. The human agent never knows. Almost nobody is designing for this.

Mode 4: Customer AI ↔ Brand AI. Two AI systems negotiate directly. The customer's agent contacts the company's agent. They exchange data, verify identity through machine-readable protocols, and resolve the issue without a human on either side. Near zero readiness across the industry.

Most organisations are designing for Mode 1 and Mode 2. Some of the more ambitious ones are trying to do Mode 2 well. Almost none are thinking about Mode 3. And Mode 4 is treated as science fiction.

It's not science fiction. It's engineering.
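As a rough illustration of what that engineering looks like, the Mode 4 exchange can be sketched in a few lines: the customer's agent packages the complaint as structured, signed data, and the brand's agent verifies the signature and resolves it without a human on either side. Everything here is an assumption for illustration — the field names, the pre-shared key, and the HMAC-based handshake are hypothetical, not an existing standard.

```python
import hashlib
import hmac
import json

# Hypothetical pre-shared credential, assumed to be provisioned when the
# customer links their assistant to their account. Illustrative only.
SHARED_KEY = b"key-provisioned-at-account-link"


def customer_agent_build_claim(order_id: str, issue: str) -> dict:
    """Customer-side agent: package the complaint as machine-readable data."""
    body = {"order_id": order_id, "issue": issue, "requested": "refund"}
    payload = json.dumps(body, sort_keys=True).encode()
    sig = hmac.new(SHARED_KEY, payload, hashlib.sha256).hexdigest()
    return {"body": body, "sig": sig}


def brand_agent_handle(claim: dict) -> dict:
    """Brand-side agent: verify identity cryptographically, then resolve."""
    payload = json.dumps(claim["body"], sort_keys=True).encode()
    expected = hmac.new(SHARED_KEY, payload, hashlib.sha256).hexdigest()
    if not hmac.compare_digest(expected, claim["sig"]):
        return {"status": "rejected", "reason": "verification failed"}
    # Real resolution logic would live here; for a verified claim the
    # brand agent issues the refund directly.
    return {"status": "resolved", "action": "refund",
            "order_id": claim["body"]["order_id"]}


claim = customer_agent_build_claim("A12345", "missing item from clothing order")
print(brand_agent_handle(claim)["status"])  # resolved
```

Note what replaces the human steps: identity is established by a signature over the claim rather than by reciting a postcode, and "explaining the issue" is a structured payload rather than a conversation. That is why Mode 4 needs protocol design, not better scripts.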

What breaks

When a customer's AI calls your contact centre, your existing design assumptions fail in specific, predictable ways.

Security verification was designed to test whether a human knows their own details. An AI always knows. It doesn't fumble the postcode. It doesn't forget the order number. The verification step that's meant to be a gate becomes a rubber stamp.

Hold time calculations assume the caller will hang up. An AI won't. It will wait for four hours if it needs to.

Agent scripts are written for human conversational patterns — hesitation, emotion, context drift. An AI caller is precise, patient, and relentless. It restates the issue identically each time. It doesn't get frustrated, but it also doesn't respond to empathy statements the way your quality framework assumes.

Escalation paths rely on the customer raising their voice, asking for a manager, or expressing dissatisfaction. An AI doesn't do any of that. It just keeps requesting resolution in the same measured tone.
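The first of those failure modes — verification as a rubber stamp — is worth making concrete. A knowledge-based check tests recall, so a caller that reads directly from the customer record passes deterministically. The sketch below is illustrative; the field names and record are invented for the example.

```python
# The record the contact centre checks answers against.
CUSTOMER_RECORD = {
    "name": "A. Customer",
    "order_number": "A12345",
    "delivery_postcode": "SW1A 1AA",
}


def verify(answers: dict) -> bool:
    """Contact-centre gate: does the caller know the account details?"""
    return all(answers.get(field) == value
               for field, value in CUSTOMER_RECORD.items())


# A human caller recalls details imperfectly and sometimes fails this.
# An AI caller simply submits the record it was given, every time.
ai_answers = dict(CUSTOMER_RECORD)
print(verify(ai_answers))  # True
```

The gate still opens and closes, but it no longer carries any signal about who is on the line — which is the design problem the rest of this piece is pointing at.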

The gap nobody is filling

I've been in CX for over two decades. I've built operations at companies where the phone rang hundreds of thousands of times a month. And I can tell you: the industry conversation is almost entirely about Mode 2. How do we automate the company's side? How do we build better chatbots? How do we deflect more contacts?

Those are reasonable questions. But they're not the right questions any more. The right question is: what happens when the customer automates too?

Personal AI assistants are already in market. Apple Intelligence, Google's Gemini, standalone agents from startups you haven't heard of yet. Task-completion tools that can handle service interactions on behalf of consumers. Super-apps that bundle commerce, support, and AI into a single interface.

The consumer side is catching up fast. And when it does, the companies that only designed for Mode 2 will find their entire service architecture is built on an assumption that no longer holds.

So what now?

We built Neos Wave to work across all four modes. Not because we predicted the future perfectly — we didn't. We built it because we started from a different premise: service should work for both sides of the conversation, whatever form those sides take.

That test we ran? Three calls, three agents, none knew. That's not a stunt. That's a signal.

The question isn't whether your customers will send their AI to deal with you. It's whether you'll be ready when they do.