ChatGPT And Other LLMs Produce Bull Excrement, Not Hallucinations

In the communications surrounding LLMs and popular interfaces like ChatGPT, the term ‘hallucination’ is often used to refer to false statements in the output of these models. This implies a degree of coherency: that the LLM is trying to be cognizant of the truth but suffers occasional moments of (mild) insanity. The LLM is thus effectively treated like a young child or a person suffering from a disorder like Alzheimer’s, giving it agency in the process. That this is utter nonsense and patently incorrect is the subject of a treatise by [Michael Townsen Hicks] and colleagues, as published in Ethics and Information Technology.

Much of the distinction lies in the difference between a lie and bullshit, as so eloquently described in [Harry G. Frankfurt]’s 1986 essay and 2005 book On Bullshit. Whereas a lie is intended to deceive and cover up the truth, bullshitting is done with no regard for, or connection with, the truth. Bullshit is intended only to serve the immediate situation, reminiscent of the worst of sound-bite culture.

Continue reading “ChatGPT And Other LLMs Produce Bull Excrement, Not Hallucinations”

Air Canada’s Chatbot: Why RAG Is Better Than An LLM For Facts

Recently Air Canada was in the news regarding the outcome of Moffatt v. Air Canada, in which Air Canada was forced to pay restitution to Mr. Moffatt after he was disadvantaged by advice that a chatbot on the Air Canada website gave him about the airline’s bereavement fare policy. When Mr. Moffatt inquired whether he could apply for the bereavement fare after returning from his flight, the chatbot said that this was possible, even though the link it provided to the official bereavement policy page said otherwise.

This latter aspect is by far the most interesting part of the case, as it raises many questions about the technical details of the chatbot that Air Canada had deployed on its website. Since the basic idea behind such a chatbot is that it draws on a curated source of (company) documentation and policies, many assume that this particular chatbot instead used an LLM with more generic information in it, possibly sourced from many other public-facing policy pages.

Whatever the case may be, chatbots are increasingly used by companies, but rather than relying on a pure LLM they use what is called RAG: retrieval augmented generation. Instead of trusting whatever the language model absorbed during training, RAG first fetches factual information from a vetted source of documentation and hands it to the model, which then generates an answer grounded in that material.
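To make the idea concrete, here is a minimal sketch of the RAG pattern in Python. This is not Air Canada’s system: the tiny policy corpus, the word-overlap retriever, and the prompt wording are all illustrative assumptions, and the final call to an actual LLM is left as a comment.

```python
# Minimal RAG sketch: retrieve vetted policy text, then ground the LLM's
# answer in it. All document names and wording here are made up for
# illustration; a real system would use a proper vector index and an LLM API.

from collections import Counter

# A tiny "vetted" corpus standing in for curated company documentation.
POLICY_DOCS = {
    "bereavement_fares": (
        "Bereavement fares must be requested before travel. "
        "Refunds cannot be claimed retroactively after the flight."
    ),
    "baggage_allowance": (
        "Each passenger may check one bag up to 23 kg on standard fares."
    ),
}

def tokenize(text: str) -> list[str]:
    return [w.strip(".,?").lower() for w in text.split()]

def retrieve(question: str, docs: dict[str, str], top_k: int = 1) -> list[str]:
    """Rank documents by simple word overlap with the question."""
    q_words = Counter(tokenize(question))
    scored = []
    for name, text in docs.items():
        overlap = sum((Counter(tokenize(text)) & q_words).values())
        scored.append((overlap, name))
    scored.sort(reverse=True)
    return [docs[name] for _, name in scored[:top_k]]

def build_prompt(question: str, passages: list[str]) -> str:
    """Instruct the model to answer only from the retrieved passages."""
    context = "\n".join(f"- {p}" for p in passages)
    return (
        "Answer the customer using only the policy excerpts below. "
        "If the excerpts do not cover the question, say so.\n"
        f"Policy excerpts:\n{context}\n"
        f"Customer question: {question}\n"
    )

if __name__ == "__main__":
    question = "Can I apply for the bereavement fare after my flight?"
    passages = retrieve(question, POLICY_DOCS)
    print(build_prompt(question, passages))
    # This prompt would then be sent to the LLM; the retrieved policy text,
    # not the model's training data, supplies the facts in the reply.
```

The key design point is that the facts come from the retrieval step, so the answer can only be as wrong as the documentation it was given, which is exactly the property a customer-facing policy chatbot needs.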

Continue reading “Air Canada’s Chatbot: Why RAG Is Better Than An LLM For Facts”