While LLMs (Large Language Models) handle generic knowledge very well, our whitepaper explains that it is the combination of LLMs with internal documents that delivers trustworthy outcomes.
This approach, known as RAG (Retrieval Augmented Generation), starts with a preparation phase in which documents are uploaded, converted, segmented, vectorized, and indexed, as sketched below.
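To make the preparation phase concrete, here is a minimal sketch in Python. It assumes the sentence-transformers library and a simple fixed-size chunking strategy; the document texts, chunk size, and model name are placeholders, not the actual Kairntech implementation.

```python
# Minimal sketch of the RAG preparation phase (illustrative, not Kairntech's pipeline).
import numpy as np
from sentence_transformers import SentenceTransformer

def segment(text: str, chunk_size: int = 500) -> list[str]:
    """Split a converted document into fixed-size character chunks."""
    return [text[i:i + chunk_size] for i in range(0, len(text), chunk_size)]

# 1. Upload / convert: assume the documents are already plain text (placeholders).
documents = [
    "...full text of an internal document...",
    "...full text of another internal document...",
]

# 2. Segment every document into chunks.
chunks = [c for doc in documents for c in segment(doc)]

# 3. Vectorize: encode each chunk into a dense embedding.
model = SentenceTransformer("all-MiniLM-L6-v2")

# 4. Index: here simply an in-memory matrix of normalized embeddings.
index = model.encode(chunks, normalize_embeddings=True)
```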
Once a question is asked, a retriever selects the chunks (segments) that are closest to it, and an LLM then generates an answer based on those chunks.
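Continuing the sketch above, the retrieval and generation phase might look as follows. The `call_llm` function is a hypothetical stand-in for whichever LLM client is actually used.

```python
# Minimal sketch of retrieval and generation, reusing `model`, `chunks`, `index`.
def retrieve(question: str, k: int = 3) -> list[str]:
    """Return the k chunks whose embeddings are closest to the question."""
    q = model.encode([question], normalize_embeddings=True)[0]
    scores = index @ q                      # cosine similarity (vectors are normalized)
    top = np.argsort(scores)[::-1][:k]      # indices of the k highest-scoring chunks
    return [chunks[i] for i in top]

def answer(question: str) -> str:
    """Build a grounded prompt from the retrieved chunks and ask the LLM."""
    context = "\n\n".join(retrieve(question))
    prompt = (
        "Answer the question using only the context below.\n\n"
        f"Context:\n{context}\n\nQuestion: {question}"
    )
    return call_llm(prompt)                 # hypothetical LLM client call
```

Grounding the prompt in retrieved chunks, rather than relying on the model's generic knowledge alone, is what makes the answers traceable back to the internal documents.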
Discover the extensive RAG customization options available in the Kairntech platform by filling out the form below to access our whitepaper.