Within Bouygues Telecom, the mission of Innolab Tech is to imagine and design the services and products of tomorrow. The arrival of generative artificial intelligence has naturally opened up new innovative use cases for end customers, in-store advisors, and employees across the company. Bouygues Telecom has thus developed “Ensemble ce soir”, a new TV service that helps families find the ideal content to watch together in less than five minutes. This content recommendation service based on generative AI was designed in collaboration with Amazon Web Services and with the support of Devoteam.
In this article, Laurent Sauvage, head of the Innovation Lab, and Thomas Foucher, AI Engineer, answer our questions about the development of this innovative project and the collaboration with Devoteam’s experts on AI topics.
What are the laboratory’s missions?
Laurent: InnoLab Tech is a team of engineers whose role is to design tomorrow’s products and services. In principle, our mission is to build the prototype and then hand it over to another team for industrialisation. In practice, however, the scope is evolving, and we sometimes take projects into production ourselves to shorten the time to market.
How do you design the services of the future?
Laurent: We keep a close technology watch. We look at each new development and evaluate its potential for our products and services, whether they are intended for internal use or for the general public. When ChatGPT was released, we looked very closely at the possibilities offered by LLMs. Unlike traditional AI projects, which require a large volume of labelled data, a team of specialists and a lot of time for a sometimes uncertain result, LLMs let us respond quickly to many use cases because they are already trained. This is how we came up with the “Ensemble ce soir” service.
How did you choose to design a service based on generative AI?
Thomas: Initially, we wanted to explore how generative AI could be applied to TV through a service aimed at the general public. AWS supported us in this research, particularly on the infrastructure side, which allowed us to focus on the AI topics.
What use case does this service meet?
Laurent: It aims to solve a recurring problem in households. With today’s wealth of SVOD catalogues, how can you find video content (film, documentary, etc.) that appeals to all family members without spending too much time on it? The idea is to provide a tool that cuts the time spent searching and actually reaches a decision rather than ending in a family argument! On average, a family takes more than 20 minutes to choose content.
“Ensemble ce soir” makes this search enjoyable and reaches a result in just a few minutes.
How does the service work?
Laurent: An application on the set-top box lets you launch the service and interact with an AI in a new way. You use the TV remote control to answer questions from the AI, which assesses the wishes of the different family members and their current moods. Each person picks an answer from predefined choices with the remote. The remote control is a limited interface, but it still lets us collect usable answers.
The real conversational experience begins when the user scans a QR code. They can then chat freely with the AI on their smartphone and express their wishes in natural language. The AI takes into account all the survey answers and the conversation and gives three reasoned recommendations. Making these recommendations as relevant as possible is a real challenge, and we are improving every day.
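To make this flow concrete, here is a minimal sketch of how the two input channels described above could be represented. The structure and field names are purely illustrative assumptions, not the service’s actual data model.

```python
from dataclasses import dataclass, field

@dataclass
class MemberInput:
    """One family member's input: answers picked with the TV remote
    (predefined choices) plus free-text wishes typed on a smartphone."""
    name: str
    survey_answers: dict[str, str] = field(default_factory=dict)  # question -> chosen option
    free_text_wishes: list[str] = field(default_factory=list)     # natural-language requests

@dataclass
class FamilySession:
    """Everything the AI would consider before producing three reasoned recommendations."""
    session_id: str
    members: list[MemberInput] = field(default_factory=list)

# Hypothetical example: one answer from the remote, one wish from the smartphone
session = FamilySession(
    session_id="demo-123",
    members=[
        MemberInput(
            name="Alice",
            survey_answers={"mood": "light-hearted", "duration": "under 2 hours"},
            free_text_wishes=["something funny, ideally a recent comedy"],
        ),
    ],
)
```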
What technical environment did you choose?
Laurent: We benchmarked the different LLMs available and chose Claude 3 via Amazon Bedrock. It is a model with an excellent cost/performance ratio which, thanks to RAG (Retrieval-Augmented Generation), lets us integrate the catalogues of the different publishers to arrive at recommendations. The application provides a justification; it explains why it recommends a given piece of content: it does not summarise the film but describes how it meets the family’s different requests.
Thomas: The service consists of an Android app and a web app that communicate with the backend through Amazon API Gateway. The API interfaces with the AI system via AWS Lambda functions. We use Amazon Bedrock with Claude 3 Haiku and the Knowledge Bases feature to do RAG.
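As an illustration of this chain, here is a minimal sketch of what a Lambda behind API Gateway could look like, using the Bedrock Knowledge Bases RetrieveAndGenerate API with Claude 3 Haiku. The knowledge base ID, model ARN and event shape are assumptions for the example, not the project’s actual code.

```python
import json
import boto3

# The Bedrock Agent Runtime client exposes the Knowledge Bases RAG API (RetrieveAndGenerate)
bedrock_agent = boto3.client("bedrock-agent-runtime")

# Hypothetical identifiers: replace with a real knowledge base ID and model ARN
KNOWLEDGE_BASE_ID = "XXXXXXXXXX"
MODEL_ARN = "arn:aws:bedrock:eu-west-3::foundation-model/anthropic.claude-3-haiku-20240307-v1:0"

def lambda_handler(event, context):
    # Assumed API Gateway proxy integration: the condensed family query arrives in the body
    body = json.loads(event.get("body", "{}"))
    query = body.get("query", "")

    response = bedrock_agent.retrieve_and_generate(
        input={"text": query},
        retrieveAndGenerateConfiguration={
            "type": "KNOWLEDGE_BASE",
            "knowledgeBaseConfiguration": {
                "knowledgeBaseId": KNOWLEDGE_BASE_ID,
                "modelArn": MODEL_ARN,
            },
        },
    )

    # The generated answer, grounded in the publishers' catalogues indexed in the knowledge base
    return {
        "statusCode": 200,
        "body": json.dumps({"recommendation": response["output"]["text"]}),
    }
```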
What did using a managed service like Amazon Bedrock bring you?
Thomas: Using Amazon Bedrock saved us time when we needed to test very quickly. Bedrock made our lives easier and allowed us to accelerate the release of the solution.
What were the main challenges of this project?
Thomas: The main difficulty is getting the generative AI application to talk to the recommendation system: how do you condense information collected from users in several forms and make it usable by the recommendation engine?
Generative AI understands language very well, and we then try to match the needs users express with the content base. Note that when users do not agree on what they want, we can end up with a request that is difficult to satisfy. That is the heart of the subject, and we are still working to improve it.
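One way to approach this condensing step, sketched below under our own assumptions rather than as the team’s actual implementation, is to ask the model itself to merge the survey answers and free-text wishes into a single search query before it reaches the retrieval step.

```python
import json
import boto3

bedrock = boto3.client("bedrock-runtime")

# Hypothetical inputs gathered from the remote-control survey and the smartphone chat
survey_answers = {"Alice": "a light comedy, under 2 hours", "Sam": "something with adventure"}
free_text = ["no horror please", "ideally a recent release"]

prompt = (
    "Summarise the following family wishes into one short search query "
    "for a film catalogue, keeping every constraint:\n"
    + "\n".join(f"- {name}: {wish}" for name, wish in survey_answers.items())
    + "\n" + "\n".join(f"- {extra}" for extra in free_text)
)

# Claude 3 Haiku on Bedrock uses the Anthropic Messages request format
response = bedrock.invoke_model(
    modelId="anthropic.claude-3-haiku-20240307-v1:0",
    body=json.dumps({
        "anthropic_version": "bedrock-2023-05-31",
        "max_tokens": 200,
        "messages": [{"role": "user", "content": prompt}],
    }),
)

condensed_query = json.loads(response["body"].read())["content"][0]["text"]
print(condensed_query)  # single query handed to the retrieval step
```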
What were the challenges in terms of user interface?
Thomas: A lot of thought went into keeping it simple to use and working around the limitations of the TV remote control. AI becomes interesting when you can enter free text. However, we did not want to force users to download an application. To get around this, we use a QR code, which streamlines the process for the user. This brings its own constraints: a web client is necessarily more limited than a native application.
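As a purely illustrative sketch of this handoff (the session URL and library choice are assumptions, not the service’s implementation), the box could display a QR code encoding a one-time session link that the smartphone’s web client opens when scanned.

```python
import uuid
import qrcode  # third-party "qrcode" package (pip install "qrcode[pil]")

# Hypothetical one-time session URL that the TV application would display
session_id = uuid.uuid4().hex
session_url = f"https://example.com/ensemble-ce-soir/session/{session_id}"

# Generate the QR code image shown on screen; scanning it opens the web client for that session
img = qrcode.make(session_url)
img.save("session_qr.png")
```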
Where are you today?
Thomas: We showed the first functional prototype at the Vivatech show in May 2024. We then made it into a real product, improved the recommendations part, and, after an initial internal test, conducted a second phase of testing with a sample of customers in September. And we are already thinking about adding new features!
What did you learn from this project?
Thomas: Technically, it is a relatively new topic, and we had to be inventive to make the interface between the AI and the engine work. It was very interesting work. We implemented many techniques to improve search in the database. When we talk about AI, the general public expects it to be “magical”, so expectations are high! We hope users will be amazed by the result.
How did Devoteam support you in this project?
Laurent: Devoteam has been supporting us on this type of innovative subject since we launched our program around Generative AI. We were able to rely on their AI expertise to take up this challenge and develop this new service in less than three months.
How did you divide the tasks between the internal team and Devoteam?
Thomas: Devoteam’s experts are fully integrated into the development, and on this project we make no distinction between Devoteam and the internal team. Tasks are divided according to each person’s preferences and to whatever is most critical at a given moment. We have a trusted partnership with Devoteam that has lasted for several years. We have progressed on these subjects together and built up our shared knowledge of generative AI. We are therefore fully aligned on our understanding of the subject, the objectives, and the way of working to achieve them.
What was the added value of this support?
Laurent: Working with a partner with advanced mastery of the Cloud environment, particularly AWS, and strong expertise in AI and RAG topics is invaluable. This combination of Cloud and AI expertise makes for very effective support.
What other AI topics are you working on with Devoteam?
Laurent: We have developed Lucia, a ChatGPT-like assistant for internal use that guarantees the confidentiality of exchanges. To do this, we use the OpenAI APIs securely, with dedicated Cloud hosting. It is widely used internally and serves our employees every day.
In July, we also launched a translation service for in-store advisors. It stands out for its ease of use: each participant speaks in their own language on their smartphone. Here too, you launch the application by scanning a QR code, with nothing to install, and join the conversation orally or in writing in a very simple and natural way.
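As a hedged illustration (the endpoint, deployment and model name below are assumptions, not details of Lucia), calling an OpenAI-compatible API through a private, dedicated endpoint rather than the public service could look like this with the official Python client:

```python
from openai import OpenAI

# Hypothetical private endpoint: a dedicated, access-controlled deployment of an
# OpenAI-compatible API, so prompts and answers stay within controlled infrastructure
client = OpenAI(
    base_url="https://llm.internal.example.com/v1",  # assumed internal gateway URL
    api_key="sk-internal-placeholder",               # assumed internally issued key
)

response = client.chat.completions.create(
    model="gpt-4o-mini",  # assumed model name for the example
    messages=[{"role": "user", "content": "Summarise this customer email in French."}],
)
print(response.choices[0].message.content)
```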
To conclude
Laurent: Our initial mission was to go all the way to prototyping. Now, we regularly take our projects to industrialisation. The expertise provided by Devoteam helps us build scalable and robust architectures, go into production, and, therefore, accelerate the time to market.