We are very proud to present our latest software release (code-named “Manta”). This release lets customers create business value even faster with exciting new components (RAG, Whisper for speech-to-text conversion…) and the latest Large Language Models (GPT, Llama2, Mistral…).
Fast prototyping
The implementation of the Retrieval Augmented Generation (RAG) framework in the platform allows you to create a project, upload documents and immediately ask questions. The answers obtained from the documents contain links to their sources. Searching for information, particularly in internal documents, will never be the same again!
Experiment with an ever-growing number of pre-packaged components across the entire RAG value chain, from document conversion, segmentation and vectorization all the way to answer generation.
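To make the chain concrete, here is a toy, self-contained sketch of its core steps: segmentation, vectorization and retrieval. All function names are illustrative, not the platform's actual API, and a simple bag-of-words counter stands in for a real embedding model.

```python
# Toy RAG chain: segment -> vectorize -> retrieve (illustrative only).
from collections import Counter
import math

def segment(document: str, size: int = 20) -> list[str]:
    """Split a document into fixed-size word chunks (segments)."""
    words = document.split()
    return [" ".join(words[i:i + size]) for i in range(0, len(words), size)]

def vectorize(text: str) -> Counter:
    """Bag-of-words 'embedding' -- a placeholder for a real vectorizer."""
    return Counter(text.lower().split())

def cosine(a: Counter, b: Counter) -> float:
    """Cosine similarity between two sparse term-count vectors."""
    dot = sum(a[t] * b[t] for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def retrieve(question: str, segments: list[str], k: int = 1) -> list[str]:
    """Return the k segments most similar to the question."""
    q = vectorize(question)
    ranked = sorted(segments, key=lambda s: cosine(q, vectorize(s)),
                    reverse=True)
    return ranked[:k]

doc = ("Manta ships a Retrieval Augmented Generation framework. "
       "Whisper converts speech to text before indexing.")
segments = segment(doc, size=8)
top = retrieve("What converts speech to text?", segments)
print(top[0])
```

In a real deployment the retrieved segments would then be passed, together with the question, to an LLM for answer generation.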
The processing view shows it all:
We also built a convenient view for comparing retriever components (search method, vectorizer model) and the answers produced by different LLMs.
Powerful customization
The retriever is a key component of any RAG implementation. Adding more context to segments and fine-tuning language models on a specific business domain are two examples of customizations that are instrumental in improving accuracy.
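The first customization mentioned above can be sketched in a few lines: enrich each segment with surrounding context (here, a document title and section heading) before it is vectorized, so the retriever can match on that context too. The class and field names are hypothetical.

```python
# Illustrative sketch: prepend document context to a segment before indexing.
from dataclasses import dataclass

@dataclass
class Segment:
    text: str      # the raw segment text
    title: str     # document title (added context, hypothetical field)
    section: str   # section heading (added context, hypothetical field)

def contextualize(seg: Segment) -> str:
    """Prepend title and section so the retriever can match on them too."""
    return f"{seg.title} / {seg.section}: {seg.text}"

seg = Segment(text="The retriever ranks candidate segments.",
              title="Manta User Guide",
              section="RAG Components")
print(contextualize(seg))
```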
Quality benchmarking
We also introduce the ability to define a standard set of questions and answers against which any new question/answer pair can be benchmarked.
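One simple way to picture such a benchmark: score a new answer against the reference answer from the standard set. In this sketch, token-level F1 (a metric used in classic question-answering benchmarks) stands in for whatever scoring the platform actually applies; the data is invented for illustration.

```python
# Illustrative benchmark: token-level F1 between a candidate and a
# reference answer (punctuation-insensitive).
from collections import Counter
import re

def tokens(text: str) -> Counter:
    return Counter(re.findall(r"\w+", text.lower()))

def token_f1(candidate: str, reference: str) -> float:
    cand, ref = tokens(candidate), tokens(reference)
    overlap = sum((cand & ref).values())
    if overlap == 0:
        return 0.0
    precision = overlap / sum(cand.values())
    recall = overlap / sum(ref.values())
    return 2 * precision * recall / (precision + recall)

# Hypothetical standard question/answer set.
gold = {"Which model transcribes audio?": "Whisper transcribes the audio."}

question = "Which model transcribes audio?"
answer = "The audio is transcribed by Whisper."
score = token_f1(answer, gold[question])
print(round(score, 2))
```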
It is all about testing & experimenting to get the most out of RAG and obtain results that can be trusted.
For more details on customization strategies, please consult our white paper (see link below).
Seamless deployment
While you may find it relatively easy to prototype a RAG project yourself, industrializing such projects is quite another challenge.
Directly embed complex AI pipelines into your application with a single line of code through our powerful API: each pipeline simply becomes a new web service available to your application.
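As an illustration only, here is what consuming such a pipeline-as-a-web-service could look like. The endpoint, payload and field names are hypothetical, not the product's real API; a tiny stand-in server is included so the example is self-contained.

```python
# Illustrative only: a stub web service standing in for a deployed pipeline,
# plus the single call an application would make to it.
import json
import threading
import urllib.request
from http.server import BaseHTTPRequestHandler, HTTPServer

class PipelineHandler(BaseHTTPRequestHandler):
    def do_POST(self):
        body = json.loads(self.rfile.read(int(self.headers["Content-Length"])))
        # A real deployment would run the full RAG pipeline here.
        reply = {"answer": f"echo: {body['question']}"}
        data = json.dumps(reply).encode()
        self.send_response(200)
        self.send_header("Content-Type", "application/json")
        self.end_headers()
        self.wfile.write(data)

    def log_message(self, *args):  # silence per-request logging
        pass

server = HTTPServer(("127.0.0.1", 0), PipelineHandler)
threading.Thread(target=server.serve_forever, daemon=True).start()

# From the application's point of view, calling the pipeline is one request:
req = urllib.request.Request(
    f"http://127.0.0.1:{server.server_port}",
    data=json.dumps({"question": "What is Manta?"}).encode(),
    headers={"Content-Type": "application/json"},
)
answer = json.loads(urllib.request.urlopen(req).read())["answer"]
print(answer)
server.shutdown()
```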
See also: our white paper and RAG information