Bloomberg research: RAG LLMs may be less safe than you think
Retrieval-Augmented Generation, or RAG, has been hailed as a way to make large language models more reliable by grounding their answers in real documents. The logic sounds airtight: give a model curated…