Estimating a model's confidence in its outputs is critical for Conversational AI systems based on Large Language Models (LLMs), especially for reducing hallucination and preventing over-reliance. In this work, we provide an exhaustive exploration of methods, including approaches pro