<?xml version="1.0" encoding="utf-8" standalone="yes"?><rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom"><channel><title>Chatbot on CTOMultiplier</title><link>https://ctomultiplier.com/tags/chatbot/</link><description>Recent content in Chatbot on CTOMultiplier</description><generator>Hugo</generator><language>en</language><lastBuildDate>Fri, 26 Sep 2025 12:33:04 +0200</lastBuildDate><atom:link href="https://ctomultiplier.com/tags/chatbot/feed.xml" rel="self" type="application/rss+xml"/><item><title>Why do ChatBots hallucinate?</title><link>https://ctomultiplier.com/why-do-chatbots-hallucinate/</link><pubDate>Thu, 05 Oct 2023 10:28:23 +0000</pubDate><guid>https://ctomultiplier.com/why-do-chatbots-hallucinate/</guid><description>&lt;p&gt;Those of you who have used ChatGPT, Google Bard or similar, have probably found that sometimes these chatbots make up the answers to our questions. This is what is commonly known as hallucinations.&lt;/p&gt;
&lt;p&gt;To understand why hallucinations happen, it helps to understand, at a very basic level, how these chatbots work. The fundamental building block is the &lt;em&gt;large language model&lt;/em&gt; (LLM). These models are trained on large amounts of data, such as web pages and public-domain books, among other sources. The task of an LLM is to predict the next word, or sequence of words, that follows the text the user enters. For example, if we ask a question, the model predicts the words that come right after that question. Because the model has been trained on millions of documents, it is likely that in one (or many) of them it has seen a similar question along with its answer. Roughly speaking, an LLM works like a statistical model: during training it learns the probability that two or more words appear together, and during use it applies those probabilities to predict the next sequence of words.&lt;/p&gt;
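&lt;p&gt;As a purely illustrative sketch of that next-word idea (not how real LLMs are implemented, which use neural networks conditioned on much longer context), the toy Python bigram model below counts which word tends to follow which during "training" and then reuses those counts to predict the next word. The tiny corpus and all names are invented for the example.&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;from collections import defaultdict

# Toy stand-in for the "millions of documents" an LLM is trained on.
corpus = (
    "the capital of france is paris . "
    "the capital of italy is rome . "
    "the capital of spain is madrid ."
).split()

# "Training": count how often each word follows the previous one (bigram statistics).
counts = defaultdict(lambda: defaultdict(int))
for prev, nxt in zip(corpus, corpus[1:]):
    counts[prev][nxt] += 1

def next_word(prev):
    """Return the most frequent continuation of `prev` seen during training."""
    followers = counts[prev]
    return max(followers, key=followers.get) if followers else "."

# "Use": predict the word that follows the prompt.
prompt = "the capital of italy is".split()
print(" ".join(prompt), next_word(prompt[-1]))
# prints: the capital of italy is paris
# (all continuations of "is" are equally frequent here, so the first one seen wins)
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;Because this toy model only conditions on the single previous word, it confidently answers "paris" even when the prompt asks about Italy: a caricature of how a purely statistical next-word predictor can produce fluent text that is simply wrong.&lt;/p&gt;</description></item></channel></rss>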