Learning from human preferences is a paradigm used in the fine-tuning stage of large language models (LLMs) to better align a pretrained LLM with human preferences on downstream tasks. Traditionally, this has been done with Reinforcement Learning from Human Feedback (RLHF), which optimizes the LLM policy to maximize a reward model trained on human preference data.
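As a reference point, the RLHF policy-optimization step is commonly written as a KL-regularized reward maximization; the notation below (policy $\pi_\theta$, reward model $r_\phi$, reference policy $\pi_{\mathrm{ref}}$, KL coefficient $\beta$) is the standard convention and is not defined in the text above:

```latex
\max_{\pi_\theta} \;
\mathbb{E}_{x \sim \mathcal{D},\; y \sim \pi_\theta(\cdot \mid x)}
\big[\, r_\phi(x, y) \,\big]
\;-\; \beta \, \mathbb{D}_{\mathrm{KL}}\!\left[\, \pi_\theta(y \mid x) \,\big\|\, \pi_{\mathrm{ref}}(y \mid x) \,\right]
```

Here the expectation is over prompts $x$ from the dataset and responses $y$ sampled from the current policy, and the KL term keeps the fine-tuned policy close to the pretrained (or SFT) reference model.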