New Release (Manta)

We are very proud to present our latest release (code name “Manta”), allowing you to create business value even faster with exciting new components (RAG, Whisper for speech-to-text conversion…) and the latest large language models (GPT, Llama 2, Mistral…).

Fast prototyping for question-answering

The implementation of the Retrieval-Augmented Generation (RAG) framework in the platform allows you to create a project, upload documents and immediately ask questions. The answers obtained from your documents contain links to their sources. Searching for information, particularly in internal documents, will never be the same again!

Experiment with an ever-growing number of pre-packaged components across the entire RAG value chain: from document conversion, segmentation and vectorization to answer generation. The processing view shows it all.
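To make those steps concrete, here is a minimal sketch of that value chain in plain Python; the helper functions, chunking parameters and embedding model are illustrative assumptions, not the platform’s actual components.

```python
# Minimal RAG value-chain sketch: conversion -> segmentation -> vectorization -> answer generation.
# Helper names, chunk sizes and the embedding model are illustrative only.
import numpy as np
from sentence_transformers import SentenceTransformer

def convert_document(path: str) -> str:
    """Document conversion: read raw text (real pipelines also handle PDF, DOCX, HTML...)."""
    with open(path, encoding="utf-8") as f:
        return f.read()

def segment(text: str, size: int = 500, overlap: int = 50) -> list[str]:
    """Segmentation: split the text into overlapping chunks."""
    return [text[i:i + size] for i in range(0, len(text), size - overlap)]

model = SentenceTransformer("all-MiniLM-L6-v2")  # vectorizer (assumed model choice)

def retrieve(question: str, segments: list[str], k: int = 3) -> list[str]:
    """Vectorization + retrieval: rank segments by cosine similarity to the question."""
    seg_vecs = model.encode(segments, normalize_embeddings=True)
    q_vec = model.encode([question], normalize_embeddings=True)[0]
    scores = seg_vecs @ q_vec
    return [segments[i] for i in np.argsort(scores)[::-1][:k]]

def build_prompt(question: str, context: list[str]) -> str:
    """Answer generation: the retrieved segments become the context of an LLM prompt."""
    return "Answer using only this context:\n" + "\n---\n".join(context) + f"\n\nQuestion: {question}"
```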

We also built a highly convenient view where you can visually compare retriever components (search method, vectorizer model) and the generated answers (across the different LLMs).
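As a rough analogue of that comparison, the sketch below runs the same question through two retrieval methods (keyword overlap versus dense embeddings); the sample segments and model are illustrative only.

```python
# Side-by-side retriever comparison: keyword overlap vs. dense embeddings on the same question.
# Segments and model choice are illustrative only.
from sentence_transformers import SentenceTransformer

segments = [
    "Refunds for enterprise customers are processed within 30 days.",
    "Our office hours are 9am to 5pm, Monday to Friday.",
    "Enterprise plans include priority support and a dedicated account manager.",
]
question = "How long do enterprise refunds take?"

# Search method 1: naive keyword overlap.
def keyword_score(q: str, s: str) -> float:
    q_words, s_words = set(q.lower().split()), set(s.lower().split())
    return len(q_words & s_words) / len(q_words)

# Search method 2: dense vector similarity with a sentence-embedding model.
model = SentenceTransformer("all-MiniLM-L6-v2")
seg_vecs = model.encode(segments, normalize_embeddings=True)
q_vec = model.encode([question], normalize_embeddings=True)[0]

for seg, dense in zip(segments, seg_vecs @ q_vec):
    print(f"keyword={keyword_score(question, seg):.2f}  dense={dense:.2f}  | {seg}")
```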

Powerful customization to address your needs

The retriever is a key component of any RAG implementation. Adding more context to segments and fine-tuning language models on a specific business domain are examples of customizations that are instrumental in improving accuracy.
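As a simple illustration of the first idea, a segment can be enriched with document-level context before it is vectorized; the metadata fields below are assumptions, not the platform’s data model.

```python
# Illustrative segment enrichment: prepend document-level context to each chunk before vectorization.
# The metadata fields (title, section) are assumed; real pipelines may carry richer context.
def enrich(segment: str, title: str, section: str) -> str:
    return f"Document: {title}\nSection: {section}\n\n{segment}"

chunk = "The warranty covers parts and labour for 24 months."
print(enrich(chunk, title="Hardware Warranty Policy", section="Coverage"))
# The enriched text, not the bare chunk, is what gets embedded by the vectorizer,
# so retrieval can also match questions that mention the document or section name.
```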

Quality benchmarking to ensure superior results

We also introduce the ability to define a standard set of questions and answers against which any new question/answer pair can be benchmarked.

It is all about testing & experimenting to get the most out of RAG and obtain results that can be trusted.
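One way to picture such a benchmark is to score each generated answer against its reference answer, for example with embedding similarity; the reference pairs, model and pass threshold below are assumptions for illustration.

```python
# Illustrative quality benchmark: compare generated answers to a reference set
# using embedding similarity. The reference pairs, model and threshold are assumptions.
from sentence_transformers import SentenceTransformer

model = SentenceTransformer("all-MiniLM-L6-v2")

reference = {
    "How long do enterprise refunds take?": "Enterprise refunds are processed within 30 days.",
    "What does the warranty cover?": "Parts and labour for 24 months.",
}
generated = {
    "How long do enterprise refunds take?": "Refunds take about a month for enterprise customers.",
    "What does the warranty cover?": "The warranty covers software updates only.",
}

for question, expected in reference.items():
    vecs = model.encode([expected, generated[question]], normalize_embeddings=True)
    similarity = float(vecs[0] @ vecs[1])
    verdict = "PASS" if similarity >= 0.7 else "REVIEW"
    print(f"{verdict}  sim={similarity:.2f}  | {question}")
```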

For more details on customization strategies, please consult our white paper (see link below).

Seamless deployment to industrialize your project

While you may find it relatively easy to prototype a RAG project yourself, industrializing such a project is quite another challenge.

Directly embed complex AI pipelines with a single line of code through our powerful API: each pipeline is simply exposed as a new web service that you can call from your application.
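As an illustration only, calling such a web service from Python could look like the following; the endpoint URL, authentication header and payload fields are hypothetical placeholders, not the platform’s actual API.

```python
# Illustrative call to a deployed pipeline exposed as a web service.
# The URL, authentication header and payload fields are hypothetical placeholders.
import requests

response = requests.post(
    "https://your-platform.example.com/api/v1/pipelines/rag-qa/run",  # hypothetical endpoint
    headers={"Authorization": "Bearer <YOUR_API_KEY>"},
    json={"question": "What is our refund policy for enterprise customers?"},
    timeout=30,
)
print(response.json())
```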

Useful links