Retrieval Augmented Generation (RAG) is a technology pattern that pairs a large language model (LLM) with retrieval from a specific knowledge base: relevant facts are fetched from that knowledge base and supplied to the model alongside the user's prompt.
The benefit of the RAG pattern is that it scopes the information available to the LLM to a relevant, accurate body of knowledge. This grounds the model's answers when it responds to questions or performs tasks, increasing relevance and accuracy and reducing irrelevant or hallucinated outputs.
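As a rough illustration of the pattern, here is a minimal sketch in plain Python. All names are hypothetical and no particular vector store or LLM client is assumed: it ranks snippets from a small in-memory knowledge base by naive keyword overlap (a stand-in for vector search) and folds the top results into the prompt that would be sent to the model.

```python
# Minimal RAG sketch: retrieve relevant snippets, then ground the LLM prompt in them.
# Illustrative only; a production stack would use a vector store and an LLM client.

KNOWLEDGE_BASE = [
    "RAG stands for Retrieval Augmented Generation.",
    "Retrieved context is added to the prompt to ground the model's answer.",
    "Scoping the model to a specific knowledge base reduces hallucinated outputs.",
]

def retrieve(question: str, top_k: int = 2) -> list[str]:
    """Rank snippets by keyword overlap with the question (stand-in for vector search)."""
    q_terms = set(question.lower().split())
    scored = sorted(
        KNOWLEDGE_BASE,
        key=lambda snippet: len(q_terms & set(snippet.lower().split())),
        reverse=True,
    )
    return scored[:top_k]

def build_prompt(question: str) -> str:
    """Combine the retrieved facts and the question into a grounded prompt."""
    context = "\n".join(retrieve(question))
    return f"Answer using only the context below.\n\nContext:\n{context}\n\nQuestion: {question}"

if __name__ == "__main__":
    # The resulting prompt would be passed to an LLM of your choice.
    print(build_prompt("How does RAG reduce hallucinated outputs?"))
```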
The team at Data Language has been exploring the RAG stack approach both in R&D and in collaboration with customers.