Vancelian is a mobile wealth management application. Whether you’re investing or saving, Vancelian offers savings and investment solutions using cutting-edge technologies such as blockchain and AI. The application is easy to use and includes a trading robot to automate your investments.
To keep pace with the growth of its customer base, Vancelian launched a chatbot project based on generative AI. A growing customer base means more frequent requests to the support team, with the risk of a bottleneck in processing questions. The chatbot therefore aims to help the support team answer the most recurring questions, freeing up time to handle more complex requests. Devoteam experts supported Vancelian in this project, built on Amazon Bedrock and RAG.
In this article, Remi Gelibert, CTO, and Alexis Segura, Backend & Cloud Architect, answer our questions about this innovative tool, which streamlines customer relations.
In what context did you request Devoteam’s support?
Our customer base has grown significantly, leading to more requests to our support department. Based in France, the support team often receives simple and recurring questions. We developed a chatbot that generates answers from our FAQ and internal documentation to speed up their processing.
For this task, we asked Devoteam to support us on the GenAI part of this project, which uses Amazon Bedrock and RAG (Retrieval Augmented Generation) for documentation ingestion.
How did you collaborate with Devoteam?
Mathieu (Devoteam) led a POC upstream of the project.
We then formed a mixed team: Alexis (Vancelian) was responsible for the Zendesk support tool and the interface between the response API and Zendesk, while Damien and Alexandre (Devoteam) handled the technical GenAI part and the creation of the API. The development was organised in sprints, with Alexandre as lead and Damien on MLOps.
What is your technical environment?
All our infrastructure and workloads are on AWS Cloud. Bedrock, being an AWS-native service, was the obvious choice to meet our needs.
We used serverless tools, such as API Gateway and Lambda functions, to automate the ingestion and vectorisation of documents and the exchanges between Amazon Bedrock and Zendesk.
How did the chatbot learn from the documentation and FAQ?
Devoteam consultants used the RAG pattern to regularly and automatically ingest the entire document base and the support FAQ. This content was vectorised to serve as a knowledge base for the Claude 3 model, so that it could provide appropriate, coherent and relevant answers.
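The ingestion step described above can be sketched, very roughly, as follows. This is not Vancelian's actual implementation: the chunk sizes, the Titan embedding model ID and the absence of a concrete vector store are all assumptions made for illustration.

```python
import json


def chunk(text: str, size: int = 500, overlap: int = 50) -> list[str]:
    """Split a document into overlapping character chunks for embedding."""
    step = size - overlap
    return [text[i:i + size] for i in range(0, max(len(text) - overlap, 1), step)]


def embed(chunks: list[str], model_id: str = "amazon.titan-embed-text-v2:0") -> list[list[float]]:
    """Embed each chunk via Bedrock's InvokeModel API (only runs with AWS credentials)."""
    import boto3  # imported lazily so the pure helper above runs anywhere

    client = boto3.client("bedrock-runtime")
    vectors = []
    for c in chunks:
        resp = client.invoke_model(modelId=model_id, body=json.dumps({"inputText": c}))
        vectors.append(json.loads(resp["body"].read())["embedding"])
    return vectors
```

The overlap between consecutive chunks is a common RAG choice: it reduces the chance that an answer is split across a chunk boundary and lost at retrieval time.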
This step is essential because we operate in a highly regulated field where certain terms cannot be used, which requires great vigilance in formulating the answers.
What is the workflow followed by a request?
A request to support generates a ticket that is sent to the chatbot via API Gateway and Lambda. The chatbot provides its response via the API, and the response is posted in Zendesk, giving the support agent initial response elements.
The agent then evaluates the relevance of the response and its compliance with regulatory obligations, correcting it if necessary. The vector base is then rebuilt with these corrections, so that the model’s knowledge base improves continuously.
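A minimal sketch of the two core steps of that workflow, retrieval and draft generation, might look like this. The cosine-similarity helper is generic; the Claude 3 model ID and the prompt wording are assumptions, and the real system would pull vectors from the rebuilt vector base rather than an in-memory list.

```python
import json
import math


def top_k(query_vec: list[float], indexed_chunks: list[tuple[list[float], str]], k: int = 3) -> list[str]:
    """Rank (vector, text) pairs by cosine similarity to the query and keep the top k."""
    def cos(a: list[float], b: list[float]) -> float:
        dot = sum(x * y for x, y in zip(a, b))
        na = math.sqrt(sum(x * x for x in a))
        nb = math.sqrt(sum(x * x for x in b))
        return dot / (na * nb) if na and nb else 0.0

    ranked = sorted(indexed_chunks, key=lambda c: cos(query_vec, c[0]), reverse=True)
    return [text for _, text in ranked[:k]]


def draft_answer(question: str, passages: list[str],
                 model_id: str = "anthropic.claude-3-sonnet-20240229-v1:0") -> str:
    """Ask Claude 3 on Bedrock for a draft grounded in the retrieved passages."""
    import boto3  # imported lazily so top_k runs without AWS

    client = boto3.client("bedrock-runtime")
    body = {
        "anthropic_version": "bedrock-2023-05-31",
        "max_tokens": 512,
        "messages": [{
            "role": "user",
            "content": ("Answer using only this context:\n" + "\n".join(passages)
                        + "\n\nQuestion: " + question),
        }],
    }
    resp = client.invoke_model(modelId=model_id, body=json.dumps(body))
    return json.loads(resp["body"].read())["content"][0]["text"]
```

In production, the draft would be posted back to the ticket as an internal note via the Zendesk API rather than returned directly, so the agent always reviews it first.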
What challenges did you encounter during development?
The most complex part of the GenAI work was integrating the FAQ in three languages (English, French, and Italian) and vectorising it so it could be used in an automated, recurring way. Devoteam therefore developed Lambda functions that monitor FAQ updates in an S3 bucket, automate the RAG pipeline, and use prompt engineering to encourage certain behaviours and prohibit others.
What added value did Devoteam’s support bring you?
Since we don’t have any GenAI skills internally, we wanted to be supported by specialists in the subject. We also wanted a partner who could take charge of project management; the support was perfect here, and the sprints were very well organised. We just had to position ourselves as stakeholders and take on a few technical tasks.
Everything was very well framed, with a lot of skill. Having a partner who could move forward quickly was also much appreciated: this is the first time we have worked with a technical partner who keeps up with the pace we want.
What are the next steps?
We will now monitor how the rating of the chatbot’s responses evolves in order to improve the prompt engineering. Ultimately, our goal is for site users to be able to question the chatbot directly, so it can answer general questions about the application as well as specific requests. For example, it could search the customer database for information such as the date their credit card was sent.
We also monitor new features in Amazon Bedrock, such as Guardrails, which lets you apply safeguards to model inputs and outputs, or Knowledge Bases, which could help simplify the RAG part.
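For context, Bedrock Knowledge Bases exposes a single managed call that covers retrieval and generation in one step, which is why it could replace much of the custom RAG pipeline. A hedged sketch of how it might be queried is below; the knowledge base ID and model ARN are placeholders, not real resources.

```python
def kb_request(question: str, kb_id: str, model_arn: str) -> dict:
    """Build the retrieve-and-generate request payload for a Bedrock knowledge base."""
    return {
        "input": {"text": question},
        "retrieveAndGenerateConfiguration": {
            "type": "KNOWLEDGE_BASE",
            "knowledgeBaseConfiguration": {
                "knowledgeBaseId": kb_id,
                "modelArn": model_arn,
            },
        },
    }


def ask_knowledge_base(question: str, kb_id: str, model_arn: str) -> str:
    """Query a Bedrock knowledge base (only runs with AWS credentials and a real KB)."""
    import boto3  # imported lazily so kb_request runs without AWS

    client = boto3.client("bedrock-agent-runtime")
    resp = client.retrieve_and_generate(**kb_request(question, kb_id, model_arn))
    return resp["output"]["text"]
```

With this managed service, the chunking, embedding and vector-store maintenance shown earlier would be handled by AWS rather than by custom Lambda functions.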