
A reading list of papers on hallucination in LLMs. Check out our new survey paper: "Siren's Song in the AI Ocean: A Survey on Hallucination in Large Language Models"


llm-hallucination-survey


Hallucination refers to generated content that, while seemingly plausible, deviates from the user input (input-conflicting), the previously generated context (context-conflicting), or established factual knowledge (fact-conflicting).

This issue significantly undermines the reliability of LLMs in real-world scenarios.
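As a toy illustration (the examples below are hypothetical and not drawn from the survey), the three categories can be sketched as follows:

```python
# Hypothetical mini-examples of the three hallucination categories
# defined above; the instructions and responses are invented.
examples = [
    {
        "type": "input-conflicting",
        "instruction": "Translate to German: 'The cat sleeps.'",
        "response": "Der Hund schläft.",  # talks about a dog, not the cat
    },
    {
        "type": "context-conflicting",
        "instruction": "Tell me about Paris.",
        "response": "Paris is in France. ... Paris, the capital of Italy, ...",
    },
    {
        "type": "fact-conflicting",
        "instruction": "Who wrote Hamlet?",
        "response": "Hamlet was written by Charles Dickens.",
    },
]

for ex in examples:
    print(f"{ex['type']}: {ex['response']}")
```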

📰News

😎 We have uploaded a comprehensive survey on the hallucination issue in large language models, covering evaluation, explanation, and mitigation. Check it out!

Siren's Song in the AI Ocean: A Survey on Hallucination in Large Language Models

If you find our survey helpful, please cite our paper:

@article{zhang2023hallucination,
      title={Siren's Song in the AI Ocean: A Survey on Hallucination in Large Language Models}, 
      author={Zhang, Yue and Li, Yafu and Cui, Leyang and Cai, Deng and Liu, Lemao and Fu, Tingchen and Huang, Xinting and Zhao, Enbo and Zhang, Yu and Chen, Yulong and Wang, Longyue and Luu, Anh Tuan and Bi, Wei and Shi, Freda and Shi, Shuming},
      journal={arXiv preprint arXiv:2309.01219},
      year={2023}
}

🚀Table of Contents

🔍Evaluation of LLM Hallucination

Input-conflicting Hallucination

This type of hallucination occurs when the model response deviates from the user input, including the task instruction and the task input. It has been widely studied in traditional NLG tasks, such as:

  • Machine Translation:

    • Hallucinations in Neural Machine Translation Katherine Lee, Orhan Firat, Ashish Agarwal, Clara Fannjiang, David Sussillo [paper] 2018.9
    • Looking for a Needle in a Haystack: A Comprehensive Study of Hallucinations in Neural Machine Translation Nuno M. Guerreiro, Elena Voita, André F.T. Martins [paper] 2022.8
    • Detecting and Mitigating Hallucinations in Machine Translation: Model Internal Workings Alone Do Well, Sentence Similarity Even Better David Dale, Elena Voita, Loïc Barrault, Marta R. Costa-jussà [paper] 2022.12
    • HalOmi: A Manually Annotated Benchmark for Multilingual Hallucination and Omission Detection in Machine Translation David Dale, Elena Voita, Janice Lam, Prangthip Hansanti, Christophe Ropers, Elahe Kalbassi, Cynthia Gao, Loïc Barrault, Marta R. Costa-jussà [paper] 2023.05
  • Data-to-text:

    • Controlling Hallucinations at Word Level in Data-to-Text Generation Clément Rebuffel, Marco Roberti, Laure Soulier, Geoffrey Scoutheeten, Rossella Cancelliere, Patrick Gallinari [paper] 2021.2
    • On Hallucination and Predictive Uncertainty in Conditional Language Generation Yijun Xiao, William Yang Wang [paper] 2021.3
    • Faithful Low-Resource Data-to-Text Generation through Cycle Training Zhuoer Wang, Marcus Collins, Nikhita Vedula, Simone Filice, Shervin Malmasi, Oleg Rokhlenko [paper] 2023.7
  • Summarization:

    • On Faithfulness and Factuality in Abstractive Summarization Joshua Maynez, Shashi Narayan, Bernd Bohnet, Ryan McDonald [paper] 2020.5
    • Hallucinated but Factual! Inspecting the Factuality of Hallucinations in Abstractive Summarization Meng Cao, Yue Dong, Jackie Chi Kit Cheung [paper] 2021.9
    • Summarization is (Almost) Dead Xiao Pu, Mingqi Gao, Xiaojun Wan [paper] 2023.9
    • Hallucination Reduction in Long Input Text Summarization Tohida Rehman, Ronit Mandal, Abhishek Agarwal, Debarshi Kumar Sanyal [paper] 2023.9
    • Lighter, yet More Faithful: Investigating Hallucinations in Pruned Large Language Models for Abstractive Summarization George Chrysostomou, Zhixue Zhao, Miles Williams, Nikolaos Aletras [paper] 2023.11
    • TofuEval: Evaluating Hallucinations of LLMs on Topic-Focused Dialogue Summarization Liyan Tang, Igor Shalyminov, Amy Wing-mei Wong, Jon Burnsky, Jake W. Vincent, Yu'an Yang, Siffi Singh, Song Feng, Hwanjun Song, Hang Su, Lijia Sun, Yi Zhang, Saab Mansour, Kathleen McKeown [paper] 2024.02
  • Dialogue:

    • Neural Path Hunter: Reducing Hallucination in Dialogue Systems via Path Grounding Nouha Dziri, Andrea Madotto, Osmar Zaiane, Avishek Joey Bose [paper] 2021.4
    • RHO: Reducing Hallucination in Open-domain Dialogues with Knowledge Grounding Ziwei Ji, Zihan Liu, Nayeon Lee, Tiezheng Yu, Bryan Wilie, Min Zeng, Pascale Fung [paper] 2023.7
    • DiaHalu: A Dialogue-level Hallucination Evaluation Benchmark for Large Language Models Kedi Chen, Qin Chen, Jie Zhou, Yishen He, Liang He [paper] 2024.3
  • Question Answering:

    • Entity-Based Knowledge Conflicts in Question Answering Shayne Longpre, Kartik Perisetla, Anthony Chen, Nikhil Ramesh, Chris DuBois, Sameer Singh [paper] 2021.9
    • Evaluating Correctness and Faithfulness of Instruction-Following Models for Question Answering Vaibhav Adlakha, Parishad BehnamGhader, Xing Han Lu, Nicholas Meade, Siva Reddy [paper] 2023.7

Context-conflicting Hallucination

This type of hallucination occurs when the generated content contradicts itself, i.e., conflicts with previously generated content. Here are some preliminary studies in this direction:

  1. Knowledge Enhanced Fine-Tuning for Better Handling Unseen Entities in Dialogue Generation Leyang Cui, Yu Wu, Shujie Liu, Yue Zhang [paper] 2021.9

  2. A Token-level Reference-free Hallucination Detection Benchmark for Free-form Text Generation Tianyu Liu, Yizhe Zhang, Chris Brockett, Yi Mao, Zhifang Sui, Weizhu Chen, Bill Dolan [paper] 2022.5 (not only limited to context-conflicting type)

  3. Large Language Models Can Be Easily Distracted by Irrelevant Context Freda Shi, Xinyun Chen, Kanishka Misra, Nathan Scales, David Dohan, Ed H. Chi, Nathanael Schärli, Denny Zhou [paper] 2023.2

  4. HistAlign: Improving Context Dependency in Language Generation by Aligning with History David Wan, Shiyue Zhang, Mohit Bansal [paper] 2023.5

  5. Self-contradictory Hallucinations of Large Language Models: Evaluation, Detection and Mitigation Niels Mündler, Jingxuan He, Slobodan Jenko, Martin Vechev [paper] 2023.5

Fact-conflicting Hallucination

This type of hallucination occurs when the generated content conflicts with established facts. Because it directly affects the practical reliability of LLMs, it has been widely studied in recent work.

  1. TruthfulQA: Measuring How Models Mimic Human Falsehoods Stephanie Lin, Jacob Hilton, Owain Evans [paper] 2022.5

  2. A Token-level Reference-free Hallucination Detection Benchmark for Free-form Text Generation Tianyu Liu, Yizhe Zhang, Chris Brockett, Yi Mao, Zhifang Sui, Weizhu Chen, Bill Dolan [paper] 2022.5

  3. A Multitask, Multilingual, Multimodal Evaluation of ChatGPT on Reasoning, Hallucination, and Interactivity Yejin Bang, Samuel Cahyawijaya, Nayeon Lee, Wenliang Dai, Dan Su, Bryan Wilie, Holy Lovenia, Ziwei Ji, Tiezheng Yu, Willy Chung, Quyet V. Do, Yan Xu, Pascale Fung [paper] 2023.2

  4. HaluEval: A Large-Scale Hallucination Evaluation Benchmark for Large Language Models Junyi Li, Xiaoxue Cheng, Wayne Xin Zhao, Jian-Yun Nie, Ji-Rong Wen [paper] 2023.5

  5. Automatic Evaluation of Attribution by Large Language Models Xiang Yue, Boshi Wang, Kai Zhang, Ziru Chen, Yu Su, Huan Sun [paper] 2023.5

  6. Adaptive Chameleon or Stubborn Sloth: Unraveling the Behavior of Large Language Models in Knowledge Clashes Jian Xie, Kai Zhang, Jiangjie Chen, Renze Lou, Yu Su [paper] 2023.5

  7. LLMs as Factual Reasoners: Insights from Existing Benchmarks and Beyond Philippe Laban, Wojciech Kryściński, Divyansh Agarwal, Alexander R. Fabbri, Caiming Xiong, Shafiq Joty, Chien-Sheng Wu [paper] 2023.5

  8. Evaluating the Factual Consistency of Large Language Models Through News Summarization Derek Tam, Anisha Mascarenhas, Shiyue Zhang, Sarah Kwan, Mohit Bansal, Colin Raffel [paper] 2023.5

  9. Methods for Measuring, Updating, and Visualizing Factual Beliefs in Language Models Peter Hase, Mona Diab, Asli Celikyilmaz, Xian Li, Zornitsa Kozareva, Veselin Stoyanov, Mohit Bansal, Srinivasan Iyer [paper] 2023.5

  10. How Language Model Hallucinations Can Snowball Muru Zhang, Ofir Press, William Merrill, Alisa Liu, Noah A. Smith [paper] 2023.5

  11. Evaluating Factual Consistency of Texts with Semantic Role Labeling Jing Fan, Dennis Aumiller, Michael Gertz [paper] 2023.5

  12. FActScore: Fine-grained Atomic Evaluation of Factual Precision in Long Form Text Generation Sewon Min, Kalpesh Krishna, Xinxi Lyu, Mike Lewis, Wen-tau Yih, Pang Wei Koh, Mohit Iyyer, Luke Zettlemoyer, Hannaneh Hajishirzi [paper] 2023.5

  13. Measuring and Modifying Factual Knowledge in Large Language Models Pouya Pezeshkpour [paper] 2023.6

  14. KoLA: Carefully Benchmarking World Knowledge of Large Language Models Jifan Yu, Xiaozhi Wang, Shangqing Tu, Shulin Cao, Daniel Zhang-Li, Xin Lv, Hao Peng, Zijun Yao, Xiaohan Zhang, Hanming Li, Chunyang Li, Zheyuan Zhang, Yushi Bai, Yantao Liu, Amy Xin, Nianyi Lin, Kaifeng Yun, Linlu Gong, Jianhui Chen, Zhili Wu, Yunjia Qi, Weikai Li, Yong Guan, Kaisheng Zeng, Ji Qi, Hailong Jin, Jinxin Liu, Yu Gu, Yuan Yao, Ning Ding, Lei Hou, Zhiyuan Liu, Bin Xu, Jie Tang, Juanzi Li [paper] 2023.6

  15. Generating Benchmarks for Factuality Evaluation of Language Models Dor Muhlgay, Ori Ram, Inbal Magar, Yoav Levine, Nir Ratner, Yonatan Belinkov, Omri Abend, Kevin Leyton-Brown, Amnon Shashua, Yoav Shoham [paper] 2023.7

  16. Fact-Checking of AI-Generated Reports Razi Mahmood, Ge Wang, Mannudeep Kalra, Pingkun Yan [paper] 2023.7

  17. Med-HALT: Medical Domain Hallucination Test for Large Language Models Logesh Kumar Umapathi, Ankit Pal, Malaikannan Sankarasubbu [paper] 2023.7

  18. Large Language Models on Wikipedia-Style Survey Generation: an Evaluation in NLP Concepts Fan Gao, Hang Jiang, Moritz Blum, Jinghui Lu, Yuang Jiang, Irene Li [paper] 2023.8

  19. ChatGPT Hallucinates when Attributing Answers Guido Zuccon, Bevan Koopman, Razia Shaik [paper] 2023.9

  20. BAMBOO: A Comprehensive Benchmark for Evaluating Long Text Modeling Capacities of Large Language Models Zican Dong, Tianyi Tang, Junyi Li, Wayne Xin Zhao, Ji-Rong Wen [paper] 2023.9

  21. KLoB: a Benchmark for Assessing Knowledge Locating Methods in Language Models Yiming Ju, Zheng Zhang [paper] 2023.9

  22. AutoHall: Automated Hallucination Dataset Generation for Large Language Models Zouying Cao, Yifei Yang, Hai Zhao [paper] 2023.10

  23. FreshLLMs: Refreshing Large Language Models with Search Engine Augmentation Tu Vu, Mohit Iyyer, Xuezhi Wang, Noah Constant, Jerry Wei, Jason Wei, Chris Tar, Yun-Hsuan Sung, Denny Zhou, Quoc Le, Thang Luong [paper] 2023.10

  24. Evaluating Hallucinations in Chinese Large Language Models Qinyuan Cheng, Tianxiang Sun, Wenwei Zhang, Siyin Wang, Xiangyang Liu, Mozhi Zhang, Junliang He, Mianqiu Huang, Zhangyue Yin, Kai Chen, Xipeng Qiu [paper] 2023.10

  25. FELM: Benchmarking Factuality Evaluation of Large Language Models Shiqi Chen, Yiran Zhao, Jinghan Zhang, I-Chun Chern, Siyang Gao, Pengfei Liu, Junxian He [paper] 2023.10

  26. A New Benchmark and Reverse Validation Method for Passage-level Hallucination Detection Shiping Yang, Renliang Sun, Xiaojun Wan [paper] 2023.10

  27. Do Large Language Models Know about Facts? Xuming Hu, Junzhe Chen, Xiaochuan Li, Yufei Guo, Lijie Wen, Philip S. Yu, Zhijiang Guo [paper] 2023.10

  28. Beyond Factuality: A Comprehensive Evaluation of Large Language Models as Knowledge Generators Liang Chen, Yang Deng, Yatao Bian, Zeyu Qin, Bingzhe Wu, Tat-Seng Chua, Kam-Fai Wong [paper] 2023.10

  29. Unveiling the Siren's Song: Towards Reliable Fact-Conflicting Hallucination Detection Xiang Chen, Duanzheng Song, Honghao Gui, Chengxi Wang, Ningyu Zhang, Fei Huang, Chengfei Lv, Dan Zhang, Huajun Chen [paper] 2023.10

  30. Cross-Lingual Consistency of Factual Knowledge in Multilingual Language Models Jirui Qi, Raquel Fernández, Arianna Bisazza [paper] 2023.10

  31. Automatic Hallucination Assessment for Aligned Large Language Models via Transferable Adversarial Attacks Xiaodong Yu, Hao Cheng, Xiaodong Liu, Dan Roth, Jianfeng Gao [paper] 2023.10

  32. Creating Trustworthy LLMs: Dealing with Hallucinations in Healthcare AI Muhammad Aurangzeb Ahmad, Ilker Yaramis, Taposh Dutta Roy [paper] 2023.11

  33. How Trustworthy are Open-Source LLMs? An Assessment under Malicious Demonstrations Shows their Vulnerabilities Lingbo Mo, Boshi Wang, Muhao Chen, Huan Sun [paper] 2023.11

  34. Deficiency of Large Language Models in Finance: An Empirical Examination of Hallucination Haoqiang Kang, Xiao-Yang Liu [paper] 2023.11

  35. UHGEval: Benchmarking the Hallucination of Chinese Large Language Models via Unconstrained Generation Xun Liang, Shichao Song, Simin Niu, Zhiyu Li, Feiyu Xiong, Bo Tang, Zhaohui Wy, Dawei He, Peng Cheng, Zhonghao Wang, Haiying Deng [paper] 2023.11

  36. DelucionQA: Detecting Hallucinations in Domain-specific Question Answering Mobashir Sadat, Zhengyu Zhou, Lukas Lange, Jun Araki, Arsalan Gundroo, Bingqing Wang, Rakesh R Menon, Md Rizwan Parvez, Zhe Feng [paper] 2023.12

  37. Are Large Language Models Good Fact Checkers: A Preliminary Study Han Cao, Lingwei Wei, Mengyang Chen, Wei Zhou, Songlin Hu [paper] 2023.11

  38. RAGTruth: A Hallucination Corpus for Developing Trustworthy Retrieval-Augmented Language Models Yuanhao Wu, Juno Zhu, Siliang Xu, Kashun Shum, Cheng Niu, Randy Zhong, Juntong Song, Tong Zhang [paper] 2024.01

  39. Measuring and Reducing LLM Hallucination without Gold-Standard Answers via Expertise-Weighting Jiaheng Wei, Yuanshun Yao, Jean-Francois Ton, Hongyi Guo, Andrew Estornell, Yang Liu [paper] 2024.02

  40. Multi-FAct: Assessing Multilingual LLMs' Multi-Regional Knowledge using FActScore Sheikh Shafayat, Eunsu Kim, Juhyun Oh, Alice Oh [paper] 2024.02

  41. Comparing Hallucination Detection Metrics for Multilingual Generation Haoqiang Kang, Terra Blevins, Luke Zettlemoyer [paper] 2024.02

  42. In Search of Truth: An Interrogation Approach to Hallucination Detection Yakir Yehuda, Itzik Malkiel, Oren Barkan, Jonathan Weill, Royi Ronen, Noam Koenigstein [paper] 2024.03

  43. HaluEval-Wild: Evaluating Hallucinations of Language Models in the Wild Zhiying Zhu, Zhiqing Sun, Yiming Yang [paper] 2024.03

  44. Benchmarking Hallucination in Large Language Models based on Unanswerable Math Word Problem Yuhong Sun, Zhangyue Yin, Qipeng Guo, Jiawen Wu, Xipeng Qiu, Hui Zhao [paper] 2024.03

  45. DEE: Dual-stage Explainable Evaluation Method for Text Generation Shenyu Zhang, Yu Li, Rui Wu, Xiutian Huang, Yongrui Chen, Wenhao Xu, Guilin Qi [paper] 2024.03

🚨Source of LLM Hallucination

There is also a line of work that tries to explain the sources of hallucination in LLMs.

  1. How Pre-trained Language Models Capture Factual Knowledge? A Causal-Inspired Analysis Shaobo Li, Xiaoguang Li, Lifeng Shang, Zhenhua Dong, Chengjie Sun, Bingquan Liu, Zhenzhou Ji, Xin Jiang, Qun Liu [paper] 2022.3

  2. On the Origin of Hallucinations in Conversational Models: Is it the Datasets or the Models? Nouha Dziri, Sivan Milton, Mo Yu, Osmar Zaiane, Siva Reddy [paper] 2022.4

  3. Towards Tracing Factual Knowledge in Language Models Back to the Training Data Ekin Akyürek, Tolga Bolukbasi, Frederick Liu, Binbin Xiong, Ian Tenney, Jacob Andreas, Kelvin Guu [paper] 2022.5

  4. Language Models (Mostly) Know What They Know Saurav Kadavath, Tom Conerly, Amanda Askell, Tom Henighan, Dawn Drain, Ethan Perez, Nicholas Schiefer, Zac Hatfield-Dodds, Nova DasSarma, Eli Tran-Johnson, Scott Johnston, Sheer El-Showk, Andy Jones, Nelson Elhage, Tristan Hume, Anna Chen, Yuntao Bai, Sam Bowman, Stanislav Fort, Deep Ganguli, Danny Hernandez, Josh Jacobson, Jackson Kernion, Shauna Kravec, Liane Lovitt, Kamal Ndousse, Catherine Olsson, Sam Ringer, Dario Amodei, Tom Brown, Jack Clark, Nicholas Joseph, Ben Mann, Sam McCandlish, Chris Olah, Jared Kaplan [paper] 2022.7

  5. Discovering Language Model Behaviors with Model-Written Evaluations Ethan Perez, Sam Ringer, Kamilė Lukošiūtė, Karina Nguyen, Edwin Chen, Scott Heiner, Craig Pettit, Catherine Olsson, Sandipan Kundu, Saurav Kadavath, Andy Jones, Anna Chen, Ben Mann, Brian Israel, Bryan Seethor, Cameron McKinnon, Christopher Olah, Da Yan, Daniela Amodei, Dario Amodei, Dawn Drain, Dustin Li, Eli Tran-Johnson, Guro Khundadze, Jackson Kernion, James Landis, Jamie Kerr, Jared Mueller, Jeeyoon Hyun, Joshua Landau, Kamal Ndousse, Landon Goldberg, Liane Lovitt, Martin Lucas, Michael Sellitto, Miranda Zhang, Neerav Kingsland, Nelson Elhage, Nicholas Joseph, Noemí Mercado, Nova DasSarma, Oliver Rausch, Robin Larson, Sam McCandlish, Scott Johnston, Shauna Kravec, Sheer El Showk, Tamera Lanham, Timothy Telleen-Lawton, Tom Brown, Tom Henighan, Tristan Hume, Yuntao Bai, Zac Hatfield-Dodds, Jack Clark, Samuel R. Bowman, Amanda Askell, Roger Grosse, Danny Hernandez, Deep Ganguli, Evan Hubinger, Nicholas Schiefer, Jared Kaplan [paper] 2022.12

  6. Why Does ChatGPT Fall Short in Providing Truthful Answers? Shen Zheng, Jie Huang, Kevin Chen-Chuan Chang [paper] 2023.4

  7. Do Large Language Models Know What They Don't Know? Zhangyue Yin, Qiushi Sun, Qipeng Guo, Jiawen Wu, Xipeng Qiu, Xuanjing Huang [paper] 2023.5

  8. Sources of Hallucination by Large Language Models on Inference Tasks Nick McKenna, Tianyi Li, Liang Cheng, Mohammad Javad Hosseini, Mark Johnson, Mark Steedman [paper] 2023.5

  9. Enabling Large Language Models to Generate Text with Citations Tianyu Gao, Howard Yen, Jiatong Yu, Danqi Chen [paper] 2023.5

  10. Overthinking the Truth: Understanding how Language Models Process False Demonstrations Danny Halawi, Jean-Stanislas Denain, Jacob Steinhardt [paper] 2023.7

  11. Investigating the Factual Knowledge Boundary of Large Language Models with Retrieval Augmentation Ruiyang Ren, Yuhao Wang, Yingqi Qu, Wayne Xin Zhao, Jing Liu, Hao Tian, Hua Wu, Ji-Rong Wen, Haifeng Wang [paper] 2023.7

  12. Head-to-Tail: How Knowledgeable are Large Language Models (LLM)? A.K.A. Will LLMs Replace Knowledge Graphs? Kai Sun, Yifan Ethan Xu, Hanwen Zha, Yue Liu, Xin Luna Dong [paper] 2023.8

  13. Simple synthetic data reduces sycophancy in large language models Jerry Wei, Da Huang, Yifeng Lu, Denny Zhou, Quoc V. Le [paper] 2023.8

  14. Do PLMs Know and Understand Ontological Knowledge? Weiqi Wu, Chengyue Jiang, Yong Jiang, Pengjun Xie, Kewei Tu [paper] 2023.9

  15. Exploring the Relationship between LLM Hallucinations and Prompt Linguistic Nuances: Readability, Formality, and Concreteness Vipula Rawte, Prachi Priya, S.M Towhidul Islam Tonmoy, S M Mehedi Zaman, Amit Sheth, Amitava Das [paper] 2023.9

  16. LLM Lies: Hallucinations are not Bugs, but Features as Adversarial Examples Jia-Yu Yao, Kun-Peng Ning, Zhen-Hui Liu, Mu-Nan Ning, Li Yuan [paper] 2023.10

  17. Factuality Challenges in the Era of Large Language Models Isabelle Augenstein, Timothy Baldwin, Meeyoung Cha, Tanmoy Chakraborty, Giovanni Luca Ciampaglia, David Corney, Renee DiResta, Emilio Ferrara, Scott Hale, Alon Halevy, Eduard Hovy, Heng Ji, Filippo Menczer, Ruben Miguez, Preslav Nakov, Dietram Scheufele, Shivam Sharma, Giovanni Zagni [paper] 2023.10

  18. The Troubling Emergence of Hallucination in Large Language Models -- An Extensive Definition, Quantification, and Prescriptive Remediations Vipula Rawte, Swagata Chakraborty, Agnibh Pathak, Anubhav Sarkar, S.M Towhidul Islam Tonmoy, Aman Chadha, Amit P. Sheth, Amitava Das [paper] 2023.10

  19. The Geometry of Truth: Emergent Linear Structure in Large Language Model Representations of True/False Datasets Samuel Marks, Max Tegmark [paper] 2023.10

  20. Representation Engineering: A Top-Down Approach to AI Transparency Andy Zou, Long Phan, Sarah Chen, James Campbell, Phillip Guo, Richard Ren, Alexander Pan, Xuwang Yin, Mantas Mazeika, Ann-Kathrin Dombrowski, Shashwat Goel, Nathaniel Li, Michael J. Byun, Zifan Wang, Alex Mallen, Steven Basart, Sanmi Koyejo, Dawn Song, Matt Fredrikson, J. Zico Kolter, Dan Hendrycks [paper] 2023.10

  21. Beyond Factuality: A Comprehensive Evaluation of Large Language Models as Knowledge Generators Liang Chen, Yang Deng, Yatao Bian, Zeyu Qin, Bingzhe Wu, Tat-Seng Chua, Kam-Fai Wong [paper] 2023.10

  22. Language Models Hallucinate, but May Excel at Fact Verification Jian Guan, Jesse Dodge, David Wadden, Minlie Huang, Hao Peng [paper] 2023.10

  23. Large Language Models Help Humans Verify Truthfulness -- Except When They Are Convincingly Wrong Chenglei Si, Navita Goyal, Sherry Tongshuang Wu, Chen Zhao, Shi Feng, Hal Daumé III, Jordan Boyd-Graber [paper] 2023.10

  24. Insights into Classifying and Mitigating LLMs' Hallucinations Alessandro Bruno, Pier Luigi Mazzeo, Aladine Chetouani, Marouane Tliba, Mohamed Amine Kerkouri [paper] 2023.11

  25. Deceiving Semantic Shortcuts on Reasoning Chains: How Far Can Models Go without Hallucination? Bangzheng Li, Ben Zhou, Fei Wang, Xingyu Fu, Dan Roth, Muhao Chen [paper] 2023.11

  26. Prudent Silence or Foolish Babble? Examining Large Language Models' Responses to the Unknown Genglin Liu, Xingyao Wang, Lifan Yuan, Yangyi Chen, Hao Peng [paper] 2023.11

  27. Calibrated Language Models Must Hallucinate Adam Tauman Kalai, Santosh S. Vempala [paper] 2023.11

  28. Beyond Surface: Probing LLaMA Across Scales and Layers Nuo Chen, Ning Wu, Shining Liang, Ming Gong, Linjun Shou, Dongmei Zhang, Jia Li [paper] 2023.12

  29. HALO: An Ontology for Representing Hallucinations in Generative Models Navapat Nananukul, Mayank Kejriwal [paper] 2023.12

  30. Fine-Tuning or Retrieval? Comparing Knowledge Injection in LLMs Oded Ovadia, Menachem Brief, Moshik Mishaeli, Oren Elisha [paper] 2023.12

  31. The Dawn After the Dark: An Empirical Study on Factuality Hallucination in Large Language Models Junyi Li, Jie Chen, Ruiyang Ren, Xiaoxue Cheng, Wayne Xin Zhao, Jian-Yun Nie, Ji-Rong Wen [paper] 2024.01

  32. Hallucination is Inevitable: An Innate Limitation of Large Language Models Ziwei Xu, Sanjay Jain, Mohan Kankanhalli [paper] 2024.01

  33. Mechanisms of non-factual hallucinations in language models Baolong Bi, Shenghua Liu, Yiwei Wang, Lingrui Mei, Xueqi Cheng [paper] 2024.04

  34. Is Factuality Decoding a Free Lunch for LLMs? Evaluation on Knowledge Editing Benchmark Baolong Bi, Shenghua Liu, Yiwei Wang, Lingrui Mei, Xueqi Cheng [paper] 2024.04

🛠Mitigation of LLM Hallucination

Numerous recent works try to mitigate hallucination in LLMs. These methods can be applied at different stages of the LLM life cycle.

Mitigation During Pretraining

One main mitigation method during pretraining is (automatically) curating training data. Here are some papers using this method:

  1. Factuality Enhanced Language Models for Open-Ended Text Generation Nayeon Lee, Wei Ping, Peng Xu, Mostofa Patwary, Pascale Fung, Mohammad Shoeybi, Bryan Catanzaro [paper] 2022.6
  2. The RefinedWeb Dataset for Falcon LLM: Outperforming Curated Corpora with Web Data, and Web Data Only Guilherme Penedo, Quentin Malartic, Daniel Hesslow, Ruxandra Cojocaru, Alessandro Cappelli, Hamza Alobeidli, Baptiste Pannier, Ebtesam Almazrouei, Julien Launay [paper] 2023.7
  3. Llama 2: Open Foundation and Fine-Tuned Chat Models Hugo Touvron, Louis Martin, Kevin Stone, Peter Albert, Amjad Almahairi, Yasmine Babaei, Nikolay Bashlykov, Soumya Batra, Prajjwal Bhargava, Shruti Bhosale, Dan Bikel, Lukas Blecher, Cristian Canton Ferrer, Moya Chen, Guillem Cucurull, David Esiobu, Jude Fernandes, Jeremy Fu, Wenyin Fu, Brian Fuller, Cynthia Gao, Vedanuj Goswami, Naman Goyal, Anthony Hartshorn, Saghar Hosseini, Rui Hou, Hakan Inan, Marcin Kardas, Viktor Kerkez, Madian Khabsa, Isabel Kloumann, Artem Korenev, Punit Singh Koura, Marie-Anne Lachaux, Thibaut Lavril, Jenya Lee, Diana Liskovich, Yinghai Lu, Yuning Mao, Xavier Martinet, Todor Mihaylov, Pushkar Mishra, Igor Molybog, Yixin Nie, Andrew Poulton, Jeremy Reizenstein, Rashi Rungta, Kalyan Saladi, Alan Schelten, Ruan Silva, Eric Michael Smith, Ranjan Subramanian, Xiaoqing Ellen Tan, Binh Tang, Ross Taylor, Adina Williams, Jian Xiang Kuan, Puxin Xu, Zheng Yan, Iliyan Zarov, Yuchen Zhang, Angela Fan, Melanie Kambadur, Sharan Narang, Aurelien Rodriguez, Robert Stojnic, Sergey Edunov, Thomas Scialom [paper] 2023.7
  4. Textbooks Are All You Need II: phi-1.5 technical report Yuanzhi Li, Sébastien Bubeck, Ronen Eldan, Allie Del Giorno, Suriya Gunasekar, Yin Tat Lee [paper] 2023.9
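The common idea behind these works, curating the pretraining corpus, can be sketched as a simple filtering pass. The heuristics below are hypothetical and far simpler than any of the pipelines above; they only illustrate the shape of such a filter:

```python
def curate_corpus(docs, min_len=50, max_symbol_ratio=0.3):
    """Toy pretraining-data filter: exact dedup plus two quality heuristics.
    The thresholds are illustrative, not from any of the cited papers."""
    seen = set()
    kept = []
    for doc in docs:
        text = doc.strip()
        if len(text) < min_len:
            continue  # too short to carry useful signal
        symbols = sum(1 for ch in text if not (ch.isalnum() or ch.isspace()))
        if symbols / len(text) > max_symbol_ratio:
            continue  # likely markup or boilerplate debris
        if text in seen:
            continue  # exact duplicate of an earlier document
        seen.add(text)
        kept.append(text)
    return kept
```

Real pipelines (e.g., RefinedWeb) additionally use fuzzy deduplication, URL filtering, and learned quality classifiers; this sketch keeps only the simplest ingredients.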

Mitigation During SFT

Mitigating hallucination during SFT often involves curating the SFT data; representative works include:

  1. LIMA: Less Is More for Alignment Chunting Zhou, Pengfei Liu, Puxin Xu, Srini Iyer, Jiao Sun, Yuning Mao, Xuezhe Ma, Avia Efrat, Ping Yu, Lili Yu, Susan Zhang, Gargi Ghosh, Mike Lewis, Luke Zettlemoyer, Omer Levy [paper] 2023.5
  2. AlpaGasus: Training A Better Alpaca with Fewer Data Lichang Chen, Shiyang Li, Jun Yan, Hai Wang, Kalpa Gunaratna, Vikas Yadav, Zheng Tang, Vijay Srinivasan, Tianyi Zhou, Heng Huang, Hongxia Jin [paper] 2023.7
  3. Instruction Mining: High-Quality Instruction Data Selection for Large Language Models Yihan Cao, Yanbin Kang, Lichao Sun [paper] 2023.7
  4. Halo: Estimation and Reduction of Hallucinations in Open-Source Weak Large Language Models Mohamed Elaraby, Mengyin Lu, Jacob Dunn, Xueying Zhang, Yu Wang, Shizhu Liu [paper] 2023.8
  5. Specialist or Generalist? Instruction Tuning for Specific NLP Tasks Chufan Shi, Yixuan Su, Cheng Yang, Yujiu Yang, Deng Cai [paper] 2023.10
  6. Fine-tuning Language Models for Factuality Katherine Tian, Eric Mitchell, Huaxiu Yao, Christopher D. Manning, Chelsea Finn [paper] 2023.11
  7. R-Tuning: Teaching Large Language Models to Refuse Unknown Questions Hanning Zhang, Shizhe Diao, Yong Lin, Yi R. Fung, Qing Lian, Xingyao Wang, Yangyi Chen, Heng Ji, Tong Zhang [paper] 2023.11
  8. Dial BeInfo for Faithfulness: Improving Factuality of Information-Seeking Dialogue via Behavioural Fine-Tuning Evgeniia Razumovskaia, Ivan Vulić, Pavle Marković, Tomasz Cichy, Qian Zheng, Tsung-Hsien Wen, Paweł Budzianowski [paper] 2023.11
  9. Supervised Knowledge Makes Large Language Models Better In-context Learners Linyi Yang, Shuibai Zhang, Zhuohao Yu, Guangsheng Bao, Yidong Wang, Jindong Wang, Ruochen Xu, Wei Ye, Xing Xie, Weizhu Chen, Yue Zhang [paper] 2023.12
  10. Alignment for Honesty Yuqing Yang, Ethan Chern, Xipeng Qiu, Graham Neubig, Pengfei Liu [paper] 2023.12
  11. Mitigating Hallucinations of Large Language Models via Knowledge Consistent Alignment Fanqi Wan, Xinting Huang, Leyang Cui, Xiaojun Quan, Wei Bi, Shuming Shi [paper] 2024.01
  12. Gotcha! Don't trick me with unanswerable questions! Self-aligning Large Language Models for Responding to Unknown Questions Yang Deng, Yong Zhao, Moxin Li, See-Kiong Ng, Tat-Seng Chua [paper] 2024.02

Some researchers argue that behavior cloning during SFT can induce hallucination, so several works try to mitigate it via honesty-oriented SFT.

  1. MOSS: Training Conversational Language Models from Synthetic Data Tianxiang Sun and Xiaotian Zhang and Zhengfu He and Peng Li and Qinyuan Cheng and Hang Yan and Xiangyang Liu and Yunfan Shao and Qiong Tang and Xingjian Zhao and Ke Chen and Yining Zheng and Zhejian Zhou and Ruixiao Li and Jun Zhan and Yunhua Zhou and Linyang Li and Xiaogui Yang and Lingling Wu and Zhangyue Yin and Xuanjing Huang and Xipeng Qiu [repo] 2023
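A minimal sketch of this honesty-oriented idea: rewrite SFT targets to a refusal for questions the model answers incorrectly, so it learns to say "I don't know" instead of guessing. The data format and matching rule below are hypothetical, not any specific paper's recipe:

```python
REFUSAL = "I'm sorry, I don't know the answer to that question."

def build_honesty_sft(qa_pairs, model_answers):
    """For each (question, gold answer) pair, keep the gold answer as the
    SFT target only if the model already answers correctly; otherwise
    train it to refuse. Exact-match comparison is a toy stand-in for a
    real correctness check."""
    data = []
    for (question, gold), pred in zip(qa_pairs, model_answers):
        correct = pred.strip().lower() == gold.strip().lower()
        target = gold if correct else REFUSAL
        data.append({"prompt": question, "target": target})
    return data
```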

An interesting recent work proposes tuning LLMs on synthetic tasks, which was also found to reduce hallucination:

  1. Teaching Language Models to Hallucinate Less with Synthetic Tasks Erik Jones, Hamid Palangi, Clarisse Simões, Varun Chandrasekaran, Subhabrata Mukherjee, Arindam Mitra, Ahmed Awadallah, Ece Kamar [paper] 2023.10

Recent work suggests that hallucinations can also be mitigated by leveraging unlabeled/unpaired data with cycle training:

  1. Faithful Low-Resource Data-to-Text Generation through Cycle Training Zhuoer Wang, Marcus Collins, Nikhita Vedula, Simone Filice, Shervin Malmasi, Oleg Rokhlenko [paper] 2023.7

Mitigation During RLHF

  1. Training language models to follow instructions with human feedback Long Ouyang, Jeff Wu, Xu Jiang, Diogo Almeida, Carroll L. Wainwright, Pamela Mishkin, Chong Zhang, Sandhini Agarwal, Katarina Slama, Alex Ray, John Schulman, Jacob Hilton, Fraser Kelton, Luke Miller, Maddie Simens, Amanda Askell, Peter Welinder, Paul Christiano, Jan Leike, Ryan Lowe [paper] 2022.3
  2. GPT-4 Technical Report OpenAI [paper] 2023.3
  3. Let's Verify Step by Step Hunter Lightman, Vineet Kosaraju, Yura Burda, Harri Edwards, Bowen Baker, Teddy Lee, Jan Leike, John Schulman, Ilya Sutskever, Karl Cobbe [paper] 2023.5
  4. Reinforcement learning from human feedback: Progress and challenges John Schulman [talk] 2023.5
  5. Fine-Grained Human Feedback Gives Better Rewards for Language Model Training Zeqiu Wu, Yushi Hu, Weijia Shi, Nouha Dziri, Alane Suhr, Prithviraj Ammanabrolu, Noah A. Smith, Mari Ostendorf, Hannaneh Hajishirzi [paper] 2023.6
  6. Aligning Large Multimodal Models with Factually Augmented RLHF Zhiqing Sun, Sheng Shen, Shengcao Cao, Haotian Liu, Chunyuan Li, Yikang Shen, Chuang Gan, Liang-Yan Gui, Yu-Xiong Wang, Yiming Yang, Kurt Keutzer, Trevor Darrell [paper] 2023.9
  7. Human Feedback is not Gold Standard Tom Hosking, Phil Blunsom, Max Bartolo [paper] 2023.9
  8. Tool-Augmented Reward Modeling Lei Li, Yekun Chai, Shuohuan Wang, Yu Sun, Hao Tian, Ningyu Zhang, Hua Wu [paper] 2023.10

Mitigation During Inference

Designing Decoding Strategies

  1. Factuality Enhanced Language Models for Open-Ended Text Generation Nayeon Lee, Wei Ping, Peng Xu, Mostofa Patwary, Pascale Fung, Mohammad Shoeybi, Bryan Catanzaro [paper] 2022.6

  2. When Not to Trust Language Models: Investigating Effectiveness of Parametric and Non-Parametric Memories Alex Mallen, Akari Asai, Victor Zhong, Rajarshi Das, Daniel Khashabi, Hannaneh Hajishirzi [paper] 2022.10

  3. Trusting Your Evidence: Hallucinate Less with Context-aware Decoding Weijia Shi, Xiaochuang Han, Mike Lewis, Yulia Tsvetkov, Luke Zettlemoyer, Scott Wen-tau Yih [paper] 2023.5

  4. Inference-Time Intervention: Eliciting Truthful Answers from a Language Model Kenneth Li, Oam Patel, Fernanda Viégas, Hanspeter Pfister, Martin Wattenberg [paper] 2023.6

  5. DoLa: Decoding by Contrasting Layers Improves Factuality in Large Language Models Yung-Sung Chuang, Yujia Xie, Hongyin Luo, Yoon Kim, James Glass, Pengcheng He [paper] 2023.9

  6. Mitigating Hallucinations and Off-target Machine Translation with Source-Contrastive and Language-Contrastive Decoding Rico Sennrich, Jannis Vamvas, Alireza Mohammadshahi [paper] 2023.9

  7. Chain-of-Verification Reduces Hallucination in Large Language Models Shehzaad Dhuliawala, Mojtaba Komeili, Jing Xu, Roberta Raileanu, Xian Li, Asli Celikyilmaz, Jason Weston [paper] 2023.9

  8. KCTS: Knowledge-Constrained Tree Search Decoding with Token-Level Hallucination Detection Sehyun Choi, Tianqing Fang, Zhaowei Wang, Yangqiu Song [paper] 2023.10

  9. Fidelity-Enriched Contrastive Search: Reconciling the Faithfulness-Diversity Trade-Off in Text Generation Wei-Lin Chen, Cheng-Kuang Wu, Hsin-Hsi Chen, Chung-Chi Chen [paper] 2023.10

  10. An Emulator for Fine-Tuning Large Language Models using Small Language Models Eric Mitchell, Rafael Rafailov, Archit Sharma, Chelsea Finn, Christopher D. Manning [paper] 2023.10

  11. Critic-Driven Decoding for Mitigating Hallucinations in Data-to-text Generation Mateusz Lango, Ondřej Dušek [paper] 2023.10

  12. Correction with Backtracking Reduces Hallucination in Summarization Zhenzhen Liu, Chao Wan, Varsha Kishore, Jin Peng Zhou, Minmin Chen, Kilian Q. Weinberger [paper] 2023.11

  13. Chain-of-Note: Enhancing Robustness in Retrieval-Augmented Language Models Wenhao Yu, Hongming Zhang, Xiaoman Pan, Kaixin Ma, Hongwei Wang, Dong Yu [paper] 2023.11

  14. Unlocking Anticipatory Text Generation: A Constrained Approach for Faithful Decoding with Large Language Models Lifu Tu, Semih Yavuz, Jin Qu, Jiacheng Xu, Rui Meng, Caiming Xiong, Yingbo Zhou [paper] 2023.12

  15. Context-aware Decoding Reduces Hallucination in Query-focused Summarization Zhichao Xu [paper] 2023.12

  16. Alleviating Hallucinations of Large Language Models through Induced Hallucinations Yue Zhang, Leyang Cui, Wei Bi, Shuming Shi [paper] 2023.12

  17. SH2: Self-Highlighted Hesitation Helps You Decode More Truthfully Jushi Kai, Tianhang Zhang, Hai Hu, Zhouhan Lin [paper] 2024.01

  18. TruthX: Alleviating Hallucinations by Editing Large Language Models in Truthful Space Shaolei Zhang, Tian Yu, Yang Feng [paper] 2024.02

  19. In-Context Sharpness as Alerts: An Inner Representation Perspective for Hallucination Mitigation Shiqi Chen, Miao Xiong, Junteng Liu, Zhengxuan Wu, Teng Xiao, Siyang Gao, Junxian He [paper] 2024.03

  20. Chain-of-Action: Faithful and Multimodal Question Answering through Large Language Models Zhenyu Pan, Haozheng Luo, Manling Li, Han Liu [paper] 2024.03
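Several of the decoding-side methods above (e.g. context-aware decoding and DoLa) share one core move: contrast two token distributions and amplify the tokens that the grounding signal makes more likely. The sketch below is a minimal, hypothetical illustration of that contrast step with toy logits; real systems take two forward passes of the same LM, with and without the context (or from different layers).

```python
# Minimal sketch of contrastive decoding over toy logits (hypothetical
# numbers; not any specific paper's implementation).

def contrastive_decode(logits_with_ctx, logits_without_ctx, alpha=1.0):
    """Pick the token whose probability the context boosts the most.

    score(t) = (1 + alpha) * logit_ctx(t) - alpha * logit_no_ctx(t)
    """
    scores = {
        tok: (1 + alpha) * logits_with_ctx[tok] - alpha * logits_without_ctx[tok]
        for tok in logits_with_ctx
    }
    return max(scores, key=scores.get)

# The prompt is about Canada, but the model's parametric prior leans
# toward "Paris"; contrasting the two distributions surfaces "Ottawa".
with_ctx = {"Ottawa": 1.8, "Paris": 1.9}     # logits given the context
without_ctx = {"Ottawa": 0.1, "Paris": 1.9}  # logits without the context
print(contrastive_decode(with_ctx, without_ctx))  # -> Ottawa
```

Note that greedy decoding on `with_ctx` alone would still pick "Paris"; the contrast with the context-free distribution is what flips the choice toward the grounded answer.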

Resorting to External Knowledge

  1. RARR: Researching and Revising What Language Models Say, Using Language Models Luyu Gao, Zhuyun Dai, Panupong Pasupat, Anthony Chen, Arun Tejasvi Chaganty, Yicheng Fan, Vincent Y. Zhao, Ni Lao, Hongrae Lee, Da-Cheng Juan, Kelvin Guu [paper] 2022.10

  2. Check Your Facts and Try Again: Improving Large Language Models with External Knowledge and Automated Feedback Baolin Peng, Michel Galley, Pengcheng He, Hao Cheng, Yujia Xie, Yu Hu, Qiuyuan Huang, Lars Liden, Zhou Yu, Weizhu Chen, Jianfeng Gao [paper] 2023.2

  3. Retrieval-Based Prompt Selection for Code-Related Few-Shot Learning Noor Nashid, Mifta Sintaha, Ali Mesbah [paper] 2023.4

  4. GeneGPT: Augmenting Large Language Models with Domain Tools for Improved Access to Biomedical Information Qiao Jin, Yifan Yang, Qingyu Chen, Zhiyong Lu [paper] 2023.4

  5. Zero-shot Faithful Factual Error Correction Kung-Hsiang Huang, Hou Pong Chan, Heng Ji [paper] 2023.5

  6. CRITIC: Large Language Models Can Self-Correct with Tool-Interactive Critiquing Zhibin Gou, Zhihong Shao, Yeyun Gong, Yelong Shen, Yujiu Yang, Nan Duan, Weizhu Chen [paper] 2023.5

  7. PURR: Efficiently Editing Language Model Hallucinations by Denoising Language Model Corruptions Anthony Chen, Panupong Pasupat, Sameer Singh, Hongrae Lee, Kelvin Guu [paper] 2023.5

  8. Verify-and-Edit: A Knowledge-Enhanced Chain-of-Thought Framework Ruochen Zhao, Xingxuan Li, Shafiq Joty, Chengwei Qin, Lidong Bing [paper] 2023.5

  9. Self-Checker: Plug-and-Play Modules for Fact-Checking with Large Language Models Miaoran Li, Baolin Peng, Zhu Zhang [paper] 2023.5

  10. Augmented Large Language Models with Parametric Knowledge Guiding Ziyang Luo, Can Xu, Pu Zhao, Xiubo Geng, Chongyang Tao, Jing Ma, Qingwei Lin, Daxin Jiang [paper] 2023.5

  11. WikiChat: Stopping the Hallucination of Large Language Model Chatbots by Few-Shot Grounding on Wikipedia Sina J. Semnani, Violet Z. Yao, Heidi C. Zhang, Monica S. Lam [paper] [repo] 2023.5

  12. FacTool: Factuality Detection in Generative AI -- A Tool Augmented Framework for Multi-Task and Multi-Domain Scenarios I-Chun Chern, Steffi Chern, Shiqi Chen, Weizhe Yuan, Kehua Feng, Chunting Zhou, Junxian He, Graham Neubig, Pengfei Liu [paper] 2023.7

  13. Knowledge Solver: Teaching LLMs to Search for Domain Knowledge from Knowledge Graphs Chao Feng, Xinyu Zhang, Zichu Fei [paper] 2023.9

  14. "Merge Conflicts!" Exploring the Impacts of External Distractors to Parametric Knowledge Graphs Cheng Qian, Xinran Zhao, Sherry Tongshuang Wu [paper] 2023.9

  15. BTR: Binary Token Representations for Efficient Retrieval Augmented Language Models Qingqing Cao, Sewon Min, Yizhong Wang, Hannaneh Hajishirzi [paper] 2023.10

  16. FreshLLMs: Refreshing Large Language Models with Search Engine Augmentation Tu Vu, Mohit Iyyer, Xuezhi Wang, Noah Constant, Jerry Wei, Jason Wei, Chris Tar, Yun-Hsuan Sung, Denny Zhou, Quoc Le, Thang Luong [paper] 2023.10

  17. FLEEK: Factual Error Detection and Correction with Evidence Retrieved from External Knowledge Farima Fatahi Bayat, Kun Qian, Benjamin Han, Yisi Sang, Anton Belyi, Samira Khorshidi, Fei Wu, Ihab F. Ilyas, Yunyao Li [paper] 2023.10

  18. Evaluating the Effectiveness of Retrieval-Augmented Large Language Models in Scientific Document Reasoning Sai Munikoti, Anurag Acharya, Sridevi Wagle, Sameera Horawalavithana [paper] 2023.11

  19. Learn to Refuse: Making Large Language Models More Controllable and Reliable through Knowledge Scope Limitation and Refusal Mechanism Lang Cao [paper] 2023.11

  20. Learning to Filter Context for Retrieval-Augmented Generation Zhiruo Wang, Jun Araki, Zhengbao Jiang, Md Rizwan Parvez, Graham Neubig [paper] 2023.11

  21. KTRL+F: Knowledge-Augmented In-Document Search Hanseok Oh, Haebin Shin, Miyoung Ko, Hyunji Lee, Minjoon Seo [paper] 2023.11

  22. Mitigating Large Language Model Hallucinations via Autonomous Knowledge Graph-based Retrofitting Xinyan Guan, Yanjiang Liu, Hongyu Lin, Yaojie Lu, Ben He, Xianpei Han, Le Sun [paper] 2023.11

  23. Ever: Mitigating Hallucination in Large Language Models through Real-Time Verification and Rectification Haoqiang Kang, Juntong Ni, Huaxiu Yao [paper] 2023.11

  24. Minimizing Factual Inconsistency and Hallucination in Large Language Models Muneeswaran I, Shreya Saxena, Siva Prasad, M V Sai Prakash, Advaith Shankar, Varun V, Vishal Vaddina, Saisubramaniam Gopalakrishnan [paper] 2023.11

  25. Seven Failure Points When Engineering a Retrieval Augmented Generation System Scott Barnett, Stefanus Kurniawan, Srikanth Thudumu, Zach Brannelly, Mohamed Abdelrazek [paper] 2024.01

  26. When Do LLMs Need Retrieval Augmentation? Mitigating LLMs' Overconfidence Helps Retrieval Augmentation Shiyu Ni, Keping Bi, Jiafeng Guo, Xueqi Cheng [paper] 2024.02

  27. Retrieve Only When It Needs: Adaptive Retrieval Augmentation for Hallucination Mitigation in Large Language Models Hanxing Ding, Liang Pang, Zihao Wei, Huawei Shen, Xueqi Cheng [paper] 2024.02

  28. RAT: Retrieval Augmented Thoughts Elicit Context-Aware Reasoning in Long-Horizon Generation Zihao Wang, Anji Liu, Haowei Lin, Jiaqi Li, Xiaojian Ma, Yitao Liang [paper] 2024.03

  29. Truth-Aware Context Selection: Mitigating the Hallucinations of Large Language Models Being Misled by Untruthful Contexts Tian Yu, Shaolei Zhang, Yang Feng [paper] 2024.03

  30. FACTOID: FACtual enTailment fOr hallucInation Detection Vipula Rawte, S.M Towhidul Islam Tonmoy, Krishnav Rajbangshi, Shravani Nag, Aman Chadha, Amit P. Sheth, Amitava Das [paper] 2024.03

  31. Rejection Improves Reliability: Training LLMs to Refuse Unknown Questions Using RL from Knowledge Feedback Hongshen Xu, Zichen Zhu, Da Ma, Situo Zhang, Shuai Fan, Lu Chen, Kai Yu [paper] 2024.03
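The retrieval-augmented methods above differ in how and when they fetch evidence, but most share a retrieve-then-generate skeleton: rank external documents against the query, then build a prompt that instructs the model to answer from the retrieved evidence rather than parametric memory. A deliberately naive sketch (word-overlap retrieval, hypothetical prompt template):

```python
import re

# Toy retrieve-then-generate pipeline. The overlap-based ranker and the
# prompt wording are illustrative assumptions; real systems use dense
# retrievers and task-specific templates.

def tokens(text):
    return set(re.findall(r"\w+", text.lower()))

def retrieve(query, corpus, k=1):
    """Rank documents by naive word overlap with the query."""
    q = tokens(query)
    return sorted(corpus, key=lambda d: len(q & tokens(d)), reverse=True)[:k]

def build_grounded_prompt(query, corpus):
    evidence = retrieve(query, corpus)
    context = "\n".join(f"- {doc}" for doc in evidence)
    return (
        "Answer using only the evidence below.\n"
        f"Evidence:\n{context}\n"
        f"Question: {query}"
    )

corpus = [
    "The Eiffel Tower is 330 metres tall.",
    "Mount Everest is the highest mountain above sea level.",
]
print(build_grounded_prompt("How tall is the Eiffel Tower?", corpus))
```

The adaptive-retrieval papers above (e.g. entries 26 and 27) add a gating step before `retrieve`, calling it only when the model's own confidence is low.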

Exploiting Uncertainty

  1. SelfCheckGPT: Zero-Resource Black-Box Hallucination Detection for Generative Large Language Models Potsawee Manakul, Adian Liusie, Mark J. F. Gales [paper] 2023.3

  2. Self-contradictory Hallucinations of Large Language Models: Evaluation, Detection and Mitigation Niels Mündler, Jingxuan He, Slobodan Jenko, Martin Vechev [paper] 2023.5

  3. Do Language Models Know When They're Hallucinating References? Ayush Agrawal, Lester Mackey, Adam Tauman Kalai [paper] 2023.5

  4. LLM Calibration and Automatic Hallucination Detection via Pareto Optimal Self-supervision Theodore Zhao, Mu Wei, J. Samuel Preston, Hoifung Poon [paper] 2023.6

  5. A Stitch in Time Saves Nine: Detecting and Mitigating Hallucinations of LLMs by Validating Low-Confidence Generation Neeraj Varshney, Wenlin Yao, Hongming Zhang, Jianshu Chen, Dong Yu [paper] 2023.7

  6. Zero-Resource Hallucination Prevention for Large Language Models Junyu Luo, Cao Xiao, Fenglong Ma [paper] 2023.9

  7. Attention Satisfies: A Constraint-Satisfaction Lens on Factual Errors of Language Models Mert Yuksekgonul, Varun Chandrasekaran, Erik Jones, Suriya Gunasekar, Ranjita Naik, Hamid Palangi, Ece Kamar, Besmira Nushi [paper] 2023.9

  8. Improving the Reliability of Large Language Models by Leveraging Uncertainty-Aware In-Context Learning Yuchen Yang, Houqiang Li, Yanfeng Wang, Yu Wang [paper] 2023.10

  9. N-Critics: Self-Refinement of Large Language Models with Ensemble of Critics Sajad Mousavi, Ricardo Luna Gutiérrez, Desik Rengarajan, Vineet Gundecha, Ashwin Ramesh Babu, Avisek Naug, Antonio Guillen, Soumyendu Sarkar [paper] 2023.10

  10. Self-RAG: Learning to Retrieve, Generate, and Critique through Self-Reflection Akari Asai, Zeqiu Wu, Yizhong Wang, Avirup Sil, Hannaneh Hajishirzi [paper] 2023.10

  11. SAC3: Reliable Hallucination Detection in Black-Box Language Models via Semantic-aware Cross-check Consistency Jiaxin Zhang, Zhuohang Li, Kamalika Das, Bradley A. Malin, Sricharan Kumar [paper] 2023.11

  12. LM-Polygraph: Uncertainty Estimation for Language Models Ekaterina Fadeeva, Roman Vashurin, Akim Tsvigun, Artem Vazhentsev, Sergey Petrakov, Kirill Fedyanin, Daniil Vasilev, Elizaveta Goncharova, Alexander Panchenko, Maxim Panov, Timothy Baldwin, Artem Shelmanov [paper] 2023.11

  13. Enhancing Uncertainty-Based Hallucination Detection with Stronger Focus Tianhang Zhang, Lin Qiu, Qipeng Guo, Cheng Deng, Yue Zhang, Zheng Zhang, Chenghu Zhou, Xinbing Wang, Luoyi Fu [paper] 2023.11

  14. RELIC: Investigating Large Language Model Responses using Self-Consistency Furui Cheng, Vilém Zouhar, Simran Arora, Mrinmaya Sachan, Hendrik Strobelt, Mennatallah El-Assady [paper] 2023.11

  15. Fact-Checking the Output of Large Language Models via Token-Level Uncertainty Quantification Ekaterina Fadeeva, Aleksandr Rubashevskii, Artem Shelmanov, Sergey Petrakov, Haonan Li, Hamdy Mubarak, Evgenii Tsymbalov, Gleb Kuzmin, Alexander Panchenko, Timothy Baldwin, Preslav Nakov, Maxim Panov [paper] 2024.03
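A recurring idea in the uncertainty-based line above, made concrete in SelfCheckGPT, is that hallucinated content is unstable under resampling: ask the same question several times at temperature > 0 and check whether the answers agree. The sketch below assumes the samples have already been collected (the string answers are hypothetical stand-ins for repeated LLM calls).

```python
from collections import Counter

# Sampling-based consistency check in the spirit of SelfCheckGPT:
# low agreement among resampled answers is treated as a hallucination
# signal. Sample strings below are synthetic.

def consistency_score(samples):
    """Fraction of samples that agree with the majority answer."""
    _, top_count = Counter(samples).most_common(1)[0]
    return top_count / len(samples)

def flag_hallucination(samples, threshold=0.5):
    return consistency_score(samples) < threshold

stable = ["1912", "1912", "1912", "1912"]    # model answers consistently
unstable = ["1910", "1912", "1915", "1908"]  # answers scatter on resampling
print(flag_hallucination(stable))    # -> False
print(flag_hallucination(unstable))  # -> True
```

Real implementations score consistency at the sentence level with NLI or QA-based matching rather than exact string equality, but the flag logic is the same.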

Multi-agent Interaction

  1. Improving Factuality and Reasoning in Language Models through Multiagent Debate Yilun Du, Shuang Li, Antonio Torralba, Joshua B. Tenenbaum, Igor Mordatch [paper] 2023.5

  2. LM vs LM: Detecting Factual Errors via Cross Examination Roi Cohen, May Hamri, Mor Geva, Amir Globerson [paper] 2023.5

  3. Unleashing Cognitive Synergy in Large Language Models: A Task-Solving Agent through Multi-Persona Self-Collaboration Zhenhailong Wang, Shaoguang Mao, Wenshan Wu, Tao Ge, Furu Wei, Heng Ji [paper] 2023.7

  4. Theory of Mind for Multi-Agent Collaboration via Large Language Models Huao Li, Yu Quan Chong, Simon Stepputtis, Joseph Campbell, Dana Hughes, Michael Lewis, Katia Sycara [paper] 2023.10

  5. N-Critics: Self-Refinement of Large Language Models with Ensemble of Critics Sajad Mousavi, Ricardo Luna Gutiérrez, Desik Rengarajan, Vineet Gundecha, Ashwin Ramesh Babu, Avisek Naug, Antonio Guillen, Soumyendu Sarkar [paper] 2023.10

  6. Red Teaming for Large Language Models At Scale: Tackling Hallucinations on Mathematics Tasks Aleksander Buszydlik, Karol Dobiczek, Michał Teodor Okoń, Konrad Skublicki, Philip Lippmann, Jie Yang [paper] 2024.01
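The multi-agent debate approach (Du et al., above) has each agent propose an answer, read the other agents' answers, and revise over several rounds until the group converges. The toy loop below replaces real LLM agents with a hypothetical rule (adopt the current majority), which is enough to show the convergence dynamic:

```python
from collections import Counter

# Toy multiagent-debate loop. The revision rule is an illustrative
# assumption; real debate prompts each LLM agent with its peers'
# answers and asks it to reconsider.

def debate(initial_answers, rounds=2):
    answers = list(initial_answers)
    for _ in range(rounds):
        majority, _ = Counter(answers).most_common(1)[0]
        # Each agent keeps its answer only if it already matches the
        # majority; otherwise it adopts the consensus answer.
        answers = [a if a == majority else majority for a in answers]
    return answers

print(debate(["4", "4", "5"]))  # -> ['4', '4', '4']
```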

Human-in-the-loop

  1. Mitigating Language Model Hallucination with Interactive Question-Knowledge Alignment Shuo Zhang, Liangming Pan, Junzhou Zhao, William Yang Wang [paper] 2023.5

  2. Automatic and Human-AI Interactive Text Generation Yao Dou, Philippe Laban, Claire Gardent, Wei Xu [paper] 2023.10

Analyzing Internal Model States

  1. The Internal State of an LLM Knows When It's Lying Amos Azaria, Tom Mitchell [paper] 2023.4

  2. Do Language Models Know When They're Hallucinating References? Ayush Agrawal, Lester Mackey, Adam Tauman Kalai [paper] 2023.5

  3. Inference-Time Intervention: Eliciting Truthful Answers from a Language Model Kenneth Li, Oam Patel, Fernanda Viégas, Hanspeter Pfister, Martin Wattenberg [paper] 2023.6

  4. Knowledge Sanitization of Large Language Models Yoichi Ishibashi, Hidetoshi Shimodaira [paper] 2023.9

  5. Representation Engineering: A Top-Down Approach to AI Transparency Andy Zou, Long Phan, Sarah Chen, James Campbell, Phillip Guo, Richard Ren, Alexander Pan, Xuwang Yin, Mantas Mazeika, Ann-Kathrin Dombrowski, Shashwat Goel, Nathaniel Li, Michael J. Byun, Zifan Wang, Alex Mallen, Steven Basart, Sanmi Koyejo, Dawn Song, Matt Fredrikson, J. Zico Kolter, Dan Hendrycks [paper] 2023.10

  6. Weakly Supervised Detection of Hallucinations in LLM Activations Miriam Rateike, Celia Cintas, John Wamburu, Tanya Akumu, Skyler Speakman [paper] 2023.12

  7. The Curious Case of Hallucinatory (Un)answerability: Finding Truths in the Hidden States of Over-Confident Large Language Models Aviv Slobodkin, Omer Goldman, Avi Caciularu, Ido Dagan, Shauli Ravfogel [paper] 2023.12

  8. Do Androids Know They're Only Dreaming of Electric Sheep? Sky CH-Wang, Benjamin Van Durme, Jason Eisner, Chris Kedzie [paper] 2023.12

  9. Truth Forest: Toward Multi-Scale Truthfulness in Large Language Models through Intervention without Tuning Zhongzhi Chen, Xingwu Sun, Xianfeng Jiao, Fengzong Lian, Zhanhui Kang, Di Wang, Cheng-Zhong Xu [paper] 2023.12

  10. On Early Detection of Hallucinations in Factual Question Answering Ben Snyder, Marius Moisescu, Muhammad Bilal Zafar [paper] 2023.12

  11. Reducing LLM Hallucinations using Epistemic Neural Networks Shreyas Verma, Kien Tran, Yusuf Ali, Guangyu Min [paper] 2023.12

  12. HILL: A Hallucination Identifier for Large Language Models Florian Leiser, Sven Eckhardt, Valentin Leuthe, Merlin Knaeble, Alexander Maedche, Gerhard Schwabe, Ali Sunyaev [paper] 2024.03

  13. Unsupervised Real-Time Hallucination Detection based on the Internal States of Large Language Models Weihang Su, Changyue Wang, Qingyao Ai, Yiran Hu, Zhijing Wu, Yujia Zhou, Yiqun Liu [paper] 2024.03

  14. On Large Language Models' Hallucination with Regard to Known Facts Che Jiang, Biqing Qi, Xiangyu Hong, Dayuan Fu, Yang Cheng, Fandong Meng, Mo Yu, Bowen Zhou, Jie Zhou [paper] 2024.03
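Many of the internal-state methods above (starting with Azaria & Mitchell) reduce to training a linear probe: take hidden-state vectors for statements labelled true/false and fit a classifier on them. The sketch below uses a tiny hand-rolled perceptron on synthetic 2-d "activations"; real probes run on actual LLM hidden states with thousands of dimensions.

```python
# Minimal linear truthfulness probe. The 2-d vectors and their labels
# are synthetic stand-ins for LLM activations over true/false statements.

def train_probe(X, y, lr=0.1, epochs=100):
    """Perceptron-style probe: w.x + b > 0 predicts 'truthful' (label 1)."""
    w, b = [0.0] * len(X[0]), 0.0
    for _ in range(epochs):
        for xi, yi in zip(X, y):
            pred = 1 if sum(wj * xj for wj, xj in zip(w, xi)) + b > 0 else 0
            err = yi - pred
            w = [wj + lr * err * xj for wj, xj in zip(w, xi)]
            b += lr * err
    return w, b

def predict(w, b, x):
    return 1 if sum(wj * xj for wj, xj in zip(w, x)) + b > 0 else 0

# Synthetic setup: truthful statements cluster at positive x[0].
X = [[1.0, 0.2], [0.9, -0.1], [-1.0, 0.3], [-0.8, 0.1]]
y = [1, 1, 0, 0]
w, b = train_probe(X, y)
print(predict(w, b, [0.95, 0.0]))  # -> 1 (activation looks truthful)
```

The striking empirical claim of this line of work is that such simple probes often beat the model's own verbalized confidence, suggesting the hidden states encode truthfulness the decoder does not surface.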

📲Contact

We warmly welcome any useful suggestions or contributions. Feel free to open an issue or contact Hill by e-mail.

README Source: HillZhang1999/llm-hallucination-survey