How to make LLMs get their facts right
Large language models (LLMs) have seen significant advances in recent years, generating text of a quality that was previously unimaginable. But LLMs also suffer from a serious problem: however human-like and fluent, the text they generate can be factually wrong.
This challenge, sometimes called the “hallucination” problem, can be amusing when people tweet about LLMs making egregiously false statements. But it makes it very difficult to use LLMs in real-world applications.
AI21 Labs is among the organizations trying to address this problem by creating language models that are reliable enough for real-world applications. In an interview with TechTalks, Yoav Levine, the company's chief scientist, explained why LLMs struggle with factuality and how his research team is working to create language models that ground their text in real facts.
Read the full interview here.