The German DGI (Association for Information and Knowledge) organized a webinar on June 26 on Kairntech RAG. The objective: analysing documents with Large Language Models following the retrieval-augmented generation (RAG) approach.
Webinars are a fine tool for comfortably convening an audience and delivering a presentation. But they often consist largely of frontal teaching with many colorful slides, followed by only a short discussion at the end.
Last week’s Kairntech webinar chose a different approach. Stefan Geißler invited the audience to a hands-on session where everybody could try out the effects of LLMs on their own documents. An online instance of Kairntech’s RAG system was made available to all participants.
Natural language answer, not just a list of hits
RAG is one of the most popular applications of LLMs these days because it brings the capabilities of LLMs to a user’s own content — that is, content the LLM has never seen during training. Kairntech had prepared user accounts for its easy-to-use software. During the webinar, the participants imported content and then asked natural language questions to receive natural language answers.
Figure 1: Rather than delivering a list of potential hits as in a conventional search engine, a RAG system returns a direct answer to the user’s question.
More on the Kairntech implementation of RAG can be found here.
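The RAG pattern described above — retrieve the most relevant passages from the user’s own documents, then have the LLM answer the question grounded in those passages — can be sketched in a few lines. This is a deliberately simplified illustration, not Kairntech’s implementation: the keyword-overlap retrieval and the prompt template are stand-ins for the vector search and LLM call a real system would use.

```python
# Toy sketch of the RAG pattern (NOT the Kairntech implementation):
# 1. retrieve passages relevant to the question,
# 2. build a prompt that grounds the answer in those passages,
# 3. send the prompt to an LLM (omitted here).

def retrieve(question, documents, top_k=2):
    """Rank documents by naive word overlap with the question.

    A production system would use embeddings and a vector index instead.
    """
    q_words = set(question.lower().split())
    scored = sorted(
        documents,
        key=lambda d: len(q_words & set(d.lower().split())),
        reverse=True,
    )
    return scored[:top_k]

def build_prompt(question, passages):
    """Assemble the grounding prompt that would be sent to the LLM."""
    context = "\n".join(f"- {p}" for p in passages)
    return (
        "Answer the question using only the context below.\n"
        f"Context:\n{context}\n\n"
        f"Question: {question}\nAnswer:"
    )

# Hypothetical mini-corpus standing in for a user's imported documents.
docs = [
    "Kairntech RAG lets users import their own documents.",
    "RAG combines retrieval with LLM-based answer generation.",
    "Webinars often end with a short discussion.",
]

question = "How does RAG generate answers?"
prompt = build_prompt(question, retrieve(question, docs))
print(prompt)
```

The point of the sketch is the division of labor: retrieval narrows the corpus down to a handful of passages, and the prompt constrains the LLM to answer from that context rather than from its training data.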
Relevance for – among others – libraries
The participants came from a wide selection of large scientific and public libraries — a sector for which the possibilities offered by the adoption of LLMs are evidently of great relevance. Following the webinar, all participants can now continue to use the system for their own experiments with their own data for an extended test period.
We would like to thank Dr. Margarita Reibel-Felten of the DGI for the smooth preparation and delivery of the webinar. If you are interested in giving Kairntech RAG a try, you don’t have to wait for the next webinar. Drop us a line and we will make sure you get free trial access to the system.