If a model can accurately output a probability of hallucination, then you can tune the LLM for more or less hallucination, so you can probably improve it and let users choose based on the use case. But the fundamental problem revealed by the research is that people prefer more certainty than is actually available. People demand bullshit economic point forecasts and get anxious when smarter people talk in probabilities and decision-science terms.
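To make the tuning idea concrete, here is a toy sketch: if a model emits a hallucination probability per answer, a per-use-case threshold turns that into an answer/abstain knob. The function name, probabilities, and thresholds below are made-up illustrations, not any real model's API.

```python
def answer_or_abstain(answer: str, p_hallucination: float, threshold: float) -> str:
    """Return the answer only if the estimated hallucination risk is under the threshold."""
    return answer if p_hallucination < threshold else "I'm not sure."

# A strict threshold suits high-stakes use, a loose one suits brainstorming
# (both values are assumed for illustration).
print(answer_or_abstain("Paris", p_hallucination=0.05, threshold=0.10))  # answers
print(answer_or_abstain("Paris", p_hallucination=0.30, threshold=0.10))  # abstains
```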

Anyway, for academic research and retrieval you want to ground it: combine the LLM with BM25, knowledge graphs, and GraphRAG, with links back to sources.
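A minimal sketch of what that hybrid grounding can look like: BM25 lexical scores fused with a second ranking (standing in for dense/GraphRAG relevance) via reciprocal rank fusion, returning documents with their source links. The corpus, the link URLs, and the dense ranking here are illustrative assumptions, not a real pipeline.

```python
import math
from collections import Counter

# Toy corpus with source links (all values are placeholders).
docs = [
    {"text": "transformer attention mechanisms survey", "link": "https://example.org/a"},
    {"text": "bm25 ranking function for lexical retrieval", "link": "https://example.org/b"},
    {"text": "knowledge graphs for grounded question answering", "link": "https://example.org/c"},
]

def bm25_scores(query, docs, k1=1.5, b=0.75):
    """Score each doc against the query with Okapi BM25 (whitespace tokenization)."""
    tokenized = [d["text"].split() for d in docs]
    avgdl = sum(len(t) for t in tokenized) / len(tokenized)
    n = len(docs)
    scores = []
    for toks in tokenized:
        tf = Counter(toks)
        score = 0.0
        for term in query.split():
            df = sum(1 for t in tokenized if term in t)
            if df == 0:
                continue
            idf = math.log((n - df + 0.5) / (df + 0.5) + 1)
            denom = tf[term] + k1 * (1 - b + b * len(toks) / avgdl)
            score += idf * tf[term] * (k1 + 1) / denom
        scores.append(score)
    return scores

def rrf(rankings, k=60):
    """Reciprocal rank fusion: merge several rankings of doc indices into one."""
    fused = Counter()
    for ranking in rankings:
        for rank, idx in enumerate(ranking):
            fused[idx] += 1.0 / (k + rank + 1)
    return [idx for idx, _ in fused.most_common()]

query = "bm25 retrieval"
lex = bm25_scores(query, docs)
lex_rank = sorted(range(len(docs)), key=lambda i: -lex[i])
dense_rank = [2, 1, 0]  # stand-in for an embedding/GraphRAG ranking (assumed)
for idx in rrf([lex_rank, dense_rank])[:2]:
    print(docs[idx]["text"], "->", docs[idx]["link"])
```

The point of the fusion step is that lexical and semantic retrieval fail differently, and always returning the source link keeps the LLM's answer checkable.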

Fair point, and he makes me wonder what a programming language and toolchain truly optimized for AI would look like. 35 is a geezer when you're a prodigy, I guess.

Follow the latest AI headlines via SkynetAndChill.com on Bluesky
