What is RAG?
Retrieval Augmented Generation is a way to make AI models more personalized
Last updated: March 3, 2025
TL;DR
Retrieval Augmented Generation (RAG) is a way to make LLMs like GPT-4 more accurate and personalized to your specific data.
- LLMs are powerful as hell, but they’re also generic: they’re trained on all data on the internet ever!
- RAG helps you get more personalized responses tailored to your data by including your data in your model prompts
- RAG relies on the model’s context window, which is how much data it can take in a prompt
- Today’s RAG pipelines are pretty complex and rely on embedding models and vector databases
Alongside old-school fine-tuning, RAG is becoming the standard way to get better, more personalized results out of state-of-the-art LLMs.
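To make the TL;DR concrete, here's a minimal sketch of the idea in Python. Everything in it is made up for illustration: the `embed` function is a toy bag-of-words stand-in for a real embedding model, the `documents` list and `index` play the role of a vector database, and the final prompt is what you'd hand to an LLM like GPT-4. A real pipeline swaps in an actual embedding model and vector store, but the shape is the same: embed, retrieve, stuff the prompt.

```python
import math
from collections import Counter

# Toy stand-in for a real embedding model: a bag-of-words count vector.
# A production pipeline would call an embedding model and store the vectors
# in a vector database instead.
def embed(text: str) -> Counter:
    return Counter(text.lower().split())

def cosine(a: Counter, b: Counter) -> float:
    # Cosine similarity between two sparse count vectors.
    dot = sum(a[t] * b[t] for t in a)
    norm = math.sqrt(sum(v * v for v in a.values())) * math.sqrt(sum(v * v for v in b.values()))
    return dot / norm if norm else 0.0

# Your data. The "index" of (vector, text) pairs plays the role of a vector database.
documents = [
    "Acme's refund policy: refunds are issued within 14 days of purchase.",
    "Acme support hours: Monday to Friday, 9am to 5pm Eastern.",
    "Acme was founded in 2012 and is headquartered in Toronto.",
]
index = [(embed(doc), doc) for doc in documents]

def retrieve(question: str, k: int = 2) -> list[str]:
    # Retrieval: rank documents by similarity to the question, keep the top k.
    q_vec = embed(question)
    ranked = sorted(index, key=lambda pair: cosine(q_vec, pair[0]), reverse=True)
    return [doc for _, doc in ranked[:k]]

def build_prompt(question: str) -> str:
    # Augmentation: stuff the retrieved snippets into the prompt,
    # subject to the model's context window limit.
    context = "\n".join(retrieve(question))
    return (
        "Answer the question using only the context below.\n\n"
        f"Context:\n{context}\n\n"
        f"Question: {question}"
    )

print(build_prompt("What is the refund policy?"))
# Generation: the resulting prompt is what you'd send to an LLM like GPT-4.
```

Note that whatever you retrieve has to fit in the model's context window, which is exactly why you retrieve the most relevant pieces instead of dumping all of your data into every prompt.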
Back to the future: training models
The funny thing about RAG is that the basic concept has been around for as long as machine learning has. Long-time readers will recall that back in the day, I studied Data Science in undergrad. “Old school” machine learning, before everyone was calling it AI, was entirely predicated on training a new model for every problem.
How old school ML worked: custom models
Imagine you’re a Data Scientist tasked with understanding and predicting customer churn (your employer has a big churn problem). Your goal is to be able to predict a brand new customer’s chances of churning, so your marketing team can ...