Loki: Open-source solution designed to automate the process of verifying...
Woodpecker: Hallucination Correction for Multimodal Large Language M...
Awesome-LLM-Robustness: a curated list of Uncertainty, Reliability and R...
[ICLR'24] Mitigating Hallucination in Large Multi-Modal Models via Robus...
RefChecker provides an automatic checking pipeline and benchmark dataset fo...
[CVPR'24] HallusionBench: You See What You Think? Or You Think What You ...
Benchmarking the Hallucination of Chinese Large Language Models via Unco...
TruthX: Alleviating Hallucinations by Editing Large Language Models in T...
Code & Data for our Paper "Alleviating Hallucinations of Large Language ...
A framework for hallucination detection and correction in LLMs