LRV Instruction

[ICLR'24] Mitigating Hallucination in Large Multi-Modal Models via Robust Instruction Tuning