The wide acceptance of large language models (LLMs) has unlocked new
applications and social risks. Popular countermeasures for detecting
misinformation usually involve domain-specific models trained to assess the
relevance of information. Instead of evaluating the validity