
Enhancing Support Agent Efficiency with AI-Generated Content

The Snapshot

1. Innovative AI solution implemented to assist support agents in providing faster and more accurate responses to customer inquiries.

2. AI system leverages RAG and GenAI technologies, allowing agents to choose from pre-generated content, significantly improving support efficiency.

3. Hybrid architecture balances cost and effectiveness, with lessons learned highlighting the importance of context understanding in AI-generated responses.

The Challenge

Customers frequently raise questions about their telecom operator or mobile phone plans, necessitating the creation of support tickets in Zendesk. However, the process of understanding user requests, gathering context, drafting responses, and interacting with customers can be time-consuming for support agents.

In this case study, we explore how an innovative AI solution was implemented to help support agents respond to customer inquiries more efficiently from within their integrated customer service platform, Zendesk. The AI system suggests responses to support agents, empowering them to select the most appropriate answer to the customer's question.

The Solution

The proposed solution leverages cutting-edge GenAI technologies to provide support agents with pre-generated content to aid users. The workflow is triggered when a new ticket is created on the customer's Zendesk platform. It employs a Retrieval Augmented Generation (RAG) technique, drawing on the telecom operator's frequently asked questions (FAQ) documents to find the most suitable response to the customer's query.
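Before any retrieval can happen, the FAQ corpus has to be loaded, split into chunks, and embedded. The case study does not describe this step in detail, so the following is a minimal Python sketch using LangChain; the file path and chunking parameters are illustrative assumptions.

    # Hypothetical ingestion step: load the operator's FAQ and split it into
    # retrievable chunks. The file path and chunk sizes are assumptions.
    from langchain_community.document_loaders import TextLoader
    from langchain_text_splitters import RecursiveCharacterTextSplitter

    documents = TextLoader("faq/telecom_operator_faq.txt").load()

    # Overlapping chunks keep each FAQ entry retrievable on its own.
    splitter = RecursiveCharacterTextSplitter(chunk_size=500, chunk_overlap=50)
    chunks = splitter.split_documents(documents)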

A vector store backed by Amazon RDS (Relational Database Service) holds the vector representations (embeddings). The application, running on Amazon ECS (Elastic Container Service), compares the embedding of each request with the FAQ documents stored in the index and retrieves the best matches, i.e., the documents to include in the response.
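The write-up names RDS but not the specific engine or extension; a common choice for this pattern is PostgreSQL on RDS with the pgvector extension. Below is a minimal sketch under that assumption, with the connection string, embedding model id, and collection name also assumed.

    # Minimal sketch, assuming PostgreSQL on RDS with the pgvector extension.
    # Connection string, embedding model id, and collection name are assumptions.
    from langchain_community.embeddings import BedrockEmbeddings
    from langchain_community.vectorstores import PGVector

    embeddings = BedrockEmbeddings(model_id="amazon.titan-embed-text-v1")

    # Index the FAQ chunks: PGVector writes one embedding row per chunk to RDS.
    store = PGVector.from_documents(
        documents=chunks,
        embedding=embeddings,
        collection_name="telecom_faq",
        connection_string="postgresql+psycopg2://user:pass@rds-host:5432/support",
    )

    # At ticket time: embed the customer's question and retrieve the best matches.
    matches = store.similarity_search("Comment changer mon forfait mobile ?", k=3)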

LangChain orchestrates the chain steps: embedding, vector-store lookups, LLM (Large Language Model) calls, text transformations, and so on. The primary modules rephrase the customer's question, identify relevant FAQ documents using RAG, and then call the LLM. The LLM receives the context documents, the original question, the relevant part of the FAQ, and instructions stating that the caller is a customer service agent and requesting a clear, non-fictional response in French, with proper formatting and a polite closing sentence.
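The exact prompt and model are not disclosed; the sketch below shows what this generation step could look like in LangChain, with the prompt wording, the Bedrock model id, and the placeholder question all assumptions.

    # Hypothetical generation step; prompt wording and model id are assumptions.
    from langchain_core.prompts import ChatPromptTemplate
    from langchain_core.output_parsers import StrOutputParser
    from langchain_aws import ChatBedrock

    prompt = ChatPromptTemplate.from_messages([
        ("system",
         "You are assisting a customer service agent. Using only the FAQ "
         "excerpts provided, write a clear, factual answer in French, with "
         "proper formatting and a polite closing sentence."),
        ("human", "FAQ excerpts:\n{context}\n\nCustomer question:\n{question}"),
    ])

    llm = ChatBedrock(model_id="anthropic.claude-3-sonnet-20240229-v1:0")

    chain = prompt | llm | StrOutputParser()

    # 'matches' comes from the retrieval step; the question would normally be
    # the output of the rephrasing module.
    draft = chain.invoke({
        "context": "\n\n".join(doc.page_content for doc in matches),
        "question": "Comment changer mon forfait mobile ?",
    })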

Subsequently, the modules filter the response to remove contact information and apply a fact-checking technique to ensure the answer aligns with the retrieved FAQ documents. Additional transformation modules refine the customer's question, concatenate titles, and insert greeting sentences without invoking the LLM.
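The filtering and fact-checking rules are not spelled out in the case study. The sketch below shows one plausible shape for this stage: a regex pass that strips e-mail addresses and phone numbers, and a simple lexical-overlap heuristic standing in for the fact-checking module.

    import re

    def strip_contact_info(text: str) -> str:
        """Remove e-mail addresses and phone numbers before the agent sees the draft."""
        text = re.sub(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b", "[removed]", text)
        text = re.sub(r"\+?\d(?:[ .\-]?\d){8,13}", "[removed]", text)
        return text

    def fact_check(draft: str, sources: list[str], threshold: float = 0.6) -> bool:
        """Stand-in heuristic: accept the draft only if most of its substantive
        words also appear in the retrieved FAQ documents. The production module
        is described only as a 'fact-checking technique'."""
        corpus = set(" ".join(sources).lower().split())
        words = [w for w in draft.lower().split() if len(w) > 3]
        if not words:
            return True
        overlap = sum(1 for w in words if w in corpus) / len(words)
        return overlap >= threshold

A draft that fails such a check could be discarded or flagged for the agent rather than shown as a suggestion.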

The Better Change

The deployed application has significantly improved the efficiency of the customer support service.

TCO Analysis: No formal cost analysis was conducted for this project, as it falls under an internal innovation lab within the customer's IT service. Decisions about which LLMs to use are made through a pragmatic evaluation of cost and performance.

Support agents now have access to a bot that generates well-informed and contextually relevant responses, saving them time and helping end-users receive accurate assistance.

Lessons Learned: Developing an AI system that generates accurate content for support agents requires a deep understanding of context; interpreting the true meaning behind user requests, especially when it is implicit, can be challenging. Building such an application involves iterative development and the comparison of various LLMs, as results may vary significantly. Larger LLMs were observed to perform better, but at a higher cost. As a result, a hybrid architecture is employed, combining different AI solutions, including Amazon Bedrock, to balance cost and effectiveness, as sketched below.
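The case study does not describe how traffic is split between models, so the routing rule and model ids below are assumptions; the sketch only illustrates the kind of cost/quality trade-off a hybrid architecture can encode, with both models served through Amazon Bedrock.

    # Hypothetical routing policy for the hybrid architecture; model ids and
    # the routing rule itself are assumptions, not the customer's actual setup.
    from langchain_aws import ChatBedrock

    SMALL = ChatBedrock(model_id="anthropic.claude-3-haiku-20240307-v1:0")
    LARGE = ChatBedrock(model_id="anthropic.claude-3-sonnet-20240229-v1:0")

    def route(question: str, n_strong_matches: int) -> ChatBedrock:
        """Send short questions with strong FAQ matches to the cheaper model;
        escalate everything else to the larger, more capable one."""
        if n_strong_matches >= 2 and len(question.split()) <= 25:
            return SMALL
        return LARGE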