<?xml version="1.0" encoding="utf-8" standalone="yes"?><rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom"><channel><title>IA on CTOMultiplier</title><link>https://ctomultiplier.com/tags/ia/</link><description>Recent content in IA on CTOMultiplier</description><generator>Hugo</generator><language>en</language><lastBuildDate>Fri, 26 Sep 2025 12:33:04 +0200</lastBuildDate><atom:link href="https://ctomultiplier.com/tags/ia/feed.xml" rel="self" type="application/rss+xml"/><item><title>Why do ChatBots hallucinate?</title><link>https://ctomultiplier.com/why-do-chatbots-hallucinate/</link><pubDate>Thu, 05 Oct 2023 10:28:23 +0000</pubDate><guid>https://ctomultiplier.com/why-do-chatbots-hallucinate/</guid><description>&lt;p&gt;Those of you who have used ChatGPT, Google Bard or similar tools have probably found that these chatbots sometimes make up the answers to our questions. This is what is commonly known as hallucinations.&lt;/p&gt;
&lt;p&gt;To understand why they happen, the first thing is to understand, at a very basic level, how these chatbots work. The fundamental building block is the large language model (&lt;em&gt;LLM&lt;/em&gt;). These models are trained on large amounts of data, such as web pages and public-domain books, among other sources. An LLM&amp;#39;s task is to predict the next word or sequence of words that follows the text a user enters. For example, if we ask a question, the model predicts the words that come right after that question. Because the model has been trained on millions of documents, it has likely seen a similar question, along with its answer, in one (or many) of them. Roughly speaking, an LLM works like a statistical model: during training it learns the probability that two or more words appear together, and then during use it applies these probabilities to predict the next sequence of words.&lt;/p&gt;</description></item><item><title>AI is not the product</title><link>https://ctomultiplier.com/ai-is-not-the-product/</link><pubDate>Fri, 14 Jul 2023 17:20:07 +0000</pubDate><guid>https://ctomultiplier.com/ai-is-not-the-product/</guid><description>&lt;p&gt;Lately I have spoken to several founders who have told me that people have been asking them to do things with AI for their products/services. Clearly there is hype, and a lot of attention is focused on AI. In addition, there seems to be a wave of large investments in startups with AI in their name, which feeds the hype.&lt;/p&gt;
&lt;p&gt;If I may give my two cents on the subject, based on my experience working at a startup that used AI to build products, I would say:&lt;/p&gt;</description></item><item><title>A short reflection on AI and low cost</title><link>https://ctomultiplier.com/a-short-reflection-on-ai-and-low-cost/</link><pubDate>Wed, 03 May 2023 16:32:00 +0000</pubDate><guid>https://ctomultiplier.com/a-short-reflection-on-ai-and-low-cost/</guid><description>&lt;p&gt;It seems that one of the debates surrounding AI centres on whether this technology will eliminate a large number of jobs by matching or surpassing human capabilities. Although in almost no discipline today does AI match a human expert, there are already thousands of products that integrate AI, and the number is growing daily. This is a phenomenon that has exploded in the last year with the popularisation of generative AI models, and especially with the emergence of ChatGPT, the brainchild of OpenAI.&lt;/p&gt;</description></item></channel></rss>