Retrieval-Augmented Generation (RAG) has long been a reliable solution for context-based answer generation, overcoming the limited context windows of early-generation LLMs. Recently, the emergence of long-context LLMs has allowed models to incorporate much longer text sequences.