Kairntech Question&Answer allows you to interact in natural language with your own content.
Finding answers to specific questions in complex documents or large collections of documents can be a cumbersome effort, despite powerful search engine technologies: users often find themselves trying different combinations of AND, OR and exact-phrase searches, only to end up examining long lists of search results, hoping that somewhere they contain the answer to the original question.
Large language models (LLMs for short, of which GPT is a popular example) have provided us all with new ways of searching for information: among many other things, they allow asking queries in natural language and receiving direct natural language answers.
This is major progress for many NLP scenarios. However, LLMs have typically been trained on large volumes of publicly available information and will not be able to answer specific questions about your own internal content: your documents, your emails and your notes are typically (and fortunately) not part of the corpora that GPT was trained on.
The approach that we describe here nevertheless lets you benefit from the impressive capabilities of LLMs by making them available on your own documents, an approach often called RAG (retrieval-augmented generation).
A Q&A sample scenario
Let’s assume that we want to process technical documentation such that, when we have a question, we do not need to perform traditional complex full-text searches. Instead we want to ask the respective question directly and receive an answer.
For instance the documentation of the Nikon D5000 SLR camera specifies what kind of battery must be used for the remote shutter.
We see that a CR2025-type battery needs to be used. But this information is not easy to find: it is contained on page 221 of the 256-page documentation. And not every user will know that the device is called a “remote shutter”; some may call it a “remote control” or a “hand-held control” instead.
Using Kairntech Q&A, the user can ask the question in natural language and get the direct answer.
Note that the answer comes with a reference to the precise location (here the reference “”) in the content from which this answer was generated. This allows the user to verify the answer by accessing the respective content via a mouse click on this reference.
Behind the scenes: Semantic analysis and embeddings
Behind the scenes, Kairntech Q&A analyses the imported documents semantically and stores the analysis results locally. Technically, this semantic analysis comes down to computing an embedding vector for each document segment. Later, when the user asks a question, the question is analysed in the same way and segments that potentially contain the answer are identified in the stored content. This set of retrieved segments is then submitted to an LLM, which generates the answer.
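The index-then-retrieve-then-generate flow described above can be sketched in a few lines of Python. This is a deliberately toy illustration, not Kairntech’s implementation: the bag-of-words “embedding” stands in for a real trained embedding model, and the sample segments and question are invented for the example.

```python
import math
from collections import Counter

def embed(text):
    """Toy embedding: a bag-of-words term-frequency vector.
    A real system would use a trained embedding model here."""
    return Counter(text.lower().split())

def cosine(a, b):
    """Cosine similarity between two sparse term-frequency vectors."""
    dot = sum(a[t] * b[t] for t in a)
    norm = math.sqrt(sum(v * v for v in a.values())) * \
           math.sqrt(sum(v * v for v in b.values()))
    return dot / norm if norm else 0.0

# Index phase: embed each document segment once and store the vectors locally.
segments = [
    "The remote shutter uses a CR2025 battery.",
    "Attach the lens by aligning the mounting marks.",
]
index = [(seg, embed(seg)) for seg in segments]

# Query phase: embed the question the same way, retrieve the best-matching
# segment, and build a prompt that an LLM would use to generate the answer.
question = "Which battery does the remote shutter need?"
q_vec = embed(question)
best_segment, _ = max(index, key=lambda item: cosine(q_vec, item[1]))
prompt = f"Answer using only this context:\n{best_segment}\n\nQuestion: {question}"
```

Because only the small set of retrieved segments ends up in the prompt, the bulk of the document collection never has to be sent to the LLM.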
This setup has many benefits:
- The respective content does not need to be submitted to the (potentially remote and proprietary) third-party LLM. In many cases a user may hesitate to submit potentially confidential information to a third party such as GPT’s creators at OpenAI. The setup described here keeps the content (the original documents as well as the computed embedding vectors) local.
- Even leaving questions of confidentiality aside, submitting large volumes of documents to a third-party LLM may be very resource-intensive and costly. The setup outlined here keeps effort and cost minimal even for large volumes of content.
- Finally, the setup can generate answers for questions where a pure text-based search would fail to identify the relevant content (e.g. “remote shutter” vs. “hand-held control”).
Also available via the API
As with all other kinds of interaction, this Q&A scenario is also available via the Kairntech REST API and as such can be integrated into third-party environments such as apps, document management systems, etc.
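To give a feel for such an integration, here is a hedged sketch of posting a question to a Q&A endpoint over REST using only the Python standard library. The base URL, endpoint path, project name, payload field and authentication header are all assumptions for illustration, not the documented Kairntech API; consult the Kairntech documentation for the actual schema.

```python
import json
import urllib.request

API_BASE = "https://your-kairntech-instance/api"  # assumed base URL
payload = {"question": "Which battery does the remote shutter use?"}  # assumed field name

request = urllib.request.Request(
    f"{API_BASE}/projects/my-qa-project/answer",  # assumed endpoint path
    data=json.dumps(payload).encode("utf-8"),
    headers={
        "Content-Type": "application/json",
        "Authorization": "Bearer <your-token>",   # assumed auth scheme
    },
)

# With a real server and token, the call would look like:
# with urllib.request.urlopen(request) as response:
#     answer = json.loads(response.read())
```

A document management system could issue such a request whenever a user types a question, and display the returned answer together with its source reference.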
Using a Kairntech Q&A project to provide direct answers comes with many options that govern the system behavior: for instance, the type of document segments that will be analysed semantically, the type of embedding model used, and many others. While Kairntech has selected reasonable defaults for many of these options, it is worth studying in detail how to make advanced use of the Q&A functionality. More information about how a Kairntech Q&A project can be defined, customized and used can be found in the Kairntech documentation.