
NLP (3)

Retrieval-Augmented Generation for Large Language Models: A Survey
Large Language Models (LLMs) showcase impressive capabilities but encounter challenges like hallucination, outdated knowledge, and non-transparent, untraceable reasoning processes. Retrieval-Augmented Generation (RAG) has emerged as a promising solution by ... (arxiv.org)
0. Abstract: LLMs (Large Language Models) show excellent performance, but hallucination, ... 2024. 11. 11.
(RAG) Retrieval-Augmented Generation for Knowledge-Intensive NLP Tasks
Large pre-trained language models have been shown to store factual knowledge in their parameters, and achieve state-of-the-art results when fine-tuned on downstream NLP tasks. However, their ability to access and precisely manipulate knowledge is still lim... (arxiv.org)
0. Abstract: Pretrained LLMs store factual knowledge in their parameters, and on downstream NLP tasks ... 2024. 11. 3.
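The mechanic both RAG posts above describe is retrieve-then-generate: fetch the passages most relevant to a query, then condition the language model on them so its answer is grounded in retrieved text rather than parametric memory alone. Below is a minimal sketch of that loop, not the paper's DPR + BART implementation; the toy corpus, the bag-of-words cosine retriever, and the `build_prompt` helper are all illustrative assumptions.

```python
# Minimal retrieve-then-generate sketch. Real RAG uses a learned dense
# retriever over millions of passages; here retrieval is bag-of-words cosine
# similarity over a toy corpus, purely to show the pipeline shape.
import math
from collections import Counter

DOCS = [
    "RAG conditions generation on passages retrieved from an external corpus.",
    "LoRA freezes pretrained weights and trains low-rank update matrices.",
    "Hallucination means the model states facts unsupported by its sources.",
]

def embed(text: str) -> Counter:
    # Stand-in for a dense encoder: a sparse bag-of-words vector.
    return Counter(text.lower().split())

def cosine(a: Counter, b: Counter) -> float:
    dot = sum(a[t] * b[t] for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def retrieve(query: str, k: int = 2) -> list[str]:
    # Rank the corpus by similarity to the query and keep the top k passages.
    q = embed(query)
    return sorted(DOCS, key=lambda d: cosine(q, embed(d)), reverse=True)[:k]

def build_prompt(query: str) -> str:
    # The generator would be conditioned on this retrieval-augmented prompt.
    context = "\n".join(f"- {p}" for p in retrieve(query))
    return f"Answer using only the context.\nContext:\n{context}\nQuestion: {query}"

print(build_prompt("What does RAG retrieve?"))
```

Swapping the bag-of-words retriever for a dense encoder and the prompt for an actual generator call recovers the full pipeline; the control flow stays the same.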
LoRA: Low-Rank Adaptation of Large Language Models
An important paradigm of natural language processing consists of large-scale pre-training on general domain data and adaptation to particular tasks or domains. As we pre-train larger models, full fine-tuning, which retrains all model parameters, becomes le... (arxiv.org)
0. Abstract: As we pre-train larger models, full fine-tuning, which retrains all model parameters, becomes less feasible. The pre-trained model ... 2024. 2. 28.
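The LoRA idea in the abstract above is concrete enough to sketch: freeze the pretrained weight W and train only a low-rank update, so the adapted layer computes Wx + (alpha/r)·BAx with rank r much smaller than the layer dimensions. The `LoRALinear` class and its hyperparameters below are illustrative assumptions, not the paper's reference code.

```python
# A minimal LoRA sketch: the base Linear layer is frozen, and only the
# low-rank factors A and B are trainable. B is zero-initialized, so the
# layer starts out identical to the pretrained one.
import torch
import torch.nn as nn

class LoRALinear(nn.Module):
    def __init__(self, d_in: int, d_out: int, r: int = 8, alpha: float = 16.0):
        super().__init__()
        self.base = nn.Linear(d_in, d_out)
        self.base.weight.requires_grad_(False)  # freeze pretrained weight
        self.base.bias.requires_grad_(False)
        self.A = nn.Parameter(torch.randn(r, d_in) * 0.01)  # down-projection
        self.B = nn.Parameter(torch.zeros(d_out, r))        # up-projection, zero-init
        self.scale = alpha / r

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # Frozen path plus scaled low-rank update (alpha/r) * B A x.
        return self.base(x) + self.scale * (x @ self.A.T @ self.B.T)

layer = LoRALinear(768, 768, r=4)
y = layer(torch.randn(2, 768))
trainable = sum(p.numel() for p in layer.parameters() if p.requires_grad)
print(y.shape, trainable)  # only A and B contribute trainable parameters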