Paper link: KTO: Model Alignment as Prospect Theoretic Optimization
GitHub link: KTO: Model Alignment as Prospect Theoretic Optimization
Hugging Face link: Archangel - a ContextualAI Collection
(Archangel is a suite of human feedback-aligned LLMs, released as part of the Human-Aware Loss Functions (HALOs) project by Ethayarajh et al. (2024).)

Abstract

Kahneman and Tversky's..