NLP (10)

Can Large Language Models Grasp Legal Theories? Enhance Legal Reasoning with Insights from Multi-Agent Collaboration
Weikang Yuan, Junjie Cao, Zhuoren Jiang, Yangyang Kang, Jun Lin, Kaisong Song, Tianqianjin Lin, Pengwei Yan, Changlong Sun, Xiaozhong Liu. Findings of the Association for Computational Linguistics: EMNLP 2024. (aclanthology.org)
Motivations: In the legal domain, there is interest in using LLMs to sufficiently understand legal theories and handle complex .. (2025. 3. 7.)

A-MEM: Agentic Memory for LLM Agents
While large language model (LLM) agents can effectively use external tools for complex real-world tasks, they require memory systems to leverage historical experiences. Current memory systems enable basic storage and retrieval but lack sophisticated memory ... (arxiv.org)
1. Introduction: With advances in LLM agents, they can now interact with their environment, execute tasks, and make decisions. To improve their reasoning and planning abilities, .. (2025. 3. 5.)

LegalAgentBench: Evaluating LLM Agents in Legal Domain
With the increasing intelligence and autonomy of LLM agents, their potential applications in the legal domain are becoming increasingly apparent. However, existing general-domain benchmarks cannot fully capture the complexity and subtle nuances of real-world ... (arxiv.org)
1. Introduction: With advances in LLMs, legal professionals can handle tasks such as legal research, contract drafting, and precedent analysis more efficiently .. (2025. 3. 5.)

Perplexed by Perplexity: Perplexity-Based Data Pruning With Small Reference Models
In this work, we investigate whether small language models can determine high-quality subsets of large-scale text datasets that improve the performance of larger language models. While existing work has shown that pruning based on the perplexity of a large ... (arxiv.org)
1. Methods: Using only a subset of the full dataset, perplexity .. (2025. 3. 5.)
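Of the previews above, only the last one names a concrete technique: scoring each training example's perplexity with a small reference model and keeping a subset based on those scores. Below is a minimal sketch of that general idea, not the paper's exact method; the choice of gpt2 as the small reference model, the percentile selection band, and the toy corpus are all illustrative assumptions.

```python
# Hypothetical sketch of perplexity-based data pruning with a small
# reference model. Model name, selection band, and corpus are assumptions.
import math
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

def perplexity(text, model, tokenizer, device="cpu"):
    """Perplexity of `text` under the reference model: exp of mean token NLL."""
    enc = tokenizer(text, return_tensors="pt", truncation=True, max_length=1024).to(device)
    with torch.no_grad():
        out = model(**enc, labels=enc["input_ids"])  # loss = mean NLL per token
    return math.exp(out.loss.item())

def prune_by_perplexity(texts, model, tokenizer, low_pct=0.25, high_pct=0.75):
    """Keep examples whose perplexity falls inside a percentile band."""
    scores = [perplexity(t, model, tokenizer) for t in texts]
    ranked = sorted(scores)
    lo = ranked[int(low_pct * (len(ranked) - 1))]
    hi = ranked[int(high_pct * (len(ranked) - 1))]
    return [t for t, s in zip(texts, scores) if lo <= s <= hi]

if __name__ == "__main__":
    tok = AutoTokenizer.from_pretrained("gpt2")                # assumed small reference model
    lm = AutoModelForCausalLM.from_pretrained("gpt2").eval()
    corpus = ["The court held that the contract was void.",
              "asdf qwer zxcv uiop",
              "LLM agents use memory to leverage past experience."]
    kept = prune_by_perplexity(corpus, lm, tok)
    print(f"kept {len(kept)}/{len(corpus)} examples")
```

The middle-percentile band here is only one possible selection criterion; the paper compares selecting by low, medium, and high perplexity, so the band boundaries should be treated as tunable rather than prescribed.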