While LLMs (Large Language Models) handle generic knowledge very well, our whitepaper explains that it is the combination of LLMs with internal documents that delivers trustworthy outcomes.
This approach, known as RAG (Retrieval Augmented Generation), starts with a preparation phase in which documents are uploaded, converted, segmented, vectorized and indexed.
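To make the preparation phase concrete, here is a minimal sketch in Python. It is not the Kairntech implementation: the fixed-size word chunking and the toy bag-of-words `embed` function are stand-ins for whatever converter, segmenter and embedding model a real platform would use.

```python
from collections import Counter

def embed(text, vocab):
    """Toy bag-of-words vector over a fixed vocabulary (stand-in for a real embedding model)."""
    counts = Counter(text.lower().split())
    return [counts[w] for w in vocab]

def chunk(document, size=40):
    """Segment a document into fixed-size word windows (one possible chunking strategy)."""
    words = document.split()
    return [" ".join(words[i:i + size]) for i in range(0, len(words), size)]

def build_index(documents):
    """Chunk, vectorize and index every document; returns (vocab, [(vector, chunk_text)])."""
    chunks = [c for doc in documents for c in chunk(doc)]
    vocab = sorted({w for c in chunks for w in c.lower().split()})
    index = [(embed(c, vocab), c) for c in chunks]
    return vocab, index
```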
Once a question is asked, a retriever selects the chunks (segments) that are closest to the question, and an LLM then generates an answer based on these chunks.
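Continuing the sketch above (it reuses `embed`, `vocab` and `index`), the retrieval and generation steps could look like this. The cosine ranking and the prompt template are illustrative assumptions, and `call_llm` is a hypothetical placeholder for whatever LLM client is actually used.

```python
import math

def cosine(u, v):
    """Cosine similarity between two vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    norm = math.sqrt(sum(a * a for a in u)) * math.sqrt(sum(b * b for b in v))
    return dot / norm if norm else 0.0

def retrieve(question, vocab, index, k=3):
    """Return the k chunks whose vectors are closest to the question vector."""
    q_vec = embed(question, vocab)
    ranked = sorted(index, key=lambda item: cosine(q_vec, item[0]), reverse=True)
    return [text for _, text in ranked[:k]]

def build_prompt(question, chunks):
    """Assemble the prompt the LLM receives: retrieved context plus the user question."""
    context = "\n\n".join(chunks)
    return f"Answer using only the context below.\n\nContext:\n{context}\n\nQuestion: {question}"

# Usage (call_llm is hypothetical):
# answer = call_llm(build_prompt(question, retrieve(question, vocab, index)))
```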
![RAG schema](https://kairntech.com/wp-content/uploads/2024/07/RAG-schema.png)
Discover the extensive RAG customization options available in the Kairntech platform by filling in the form below to access our whitepaper.