Author: Colton Hunter Briggs

The intelligence illusion

  When you think of AI, what do you think of? Superintelligence? The future? Sci-fi? Confusing videos on the internet? Maybe you think of losing your job, or of the end of the world. Some of the recent AI image-generating software is pretty impressive. How can a computer program draw an image from a prompt? The idea that rocks, minerals and electricity can learn to beat the best chess players is wild. Are we truly entering a period when we work alongside robots? Is this the future, or is it all one big hoax? Is AI really intelligent? Are jobs going to be taken over? Are we safe?

History

  AI has actually been around since the beginning of computers. In 1951, Marvin Minsky (with Dean Edmonds) built SNARC, one of the first working neural network machines, out of vacuum tubes, and he later wrote about what might be possible once computing technology became more powerful.

 In 1959, Arthur Samuel's checkers-playing program learned over time by playing against itself; doing this, it was able to avoid bad moves and choose good ones. In 1961, an AI program named SAINT was created by the American mathematician James R. Slagle. It could solve calculus problems about as well as freshman college students. Between 1964 and 1966, a program called ELIZA was created by Joseph Weizenbaum at MIT. This was the first ever chatbot, and it could do almost all of what chatbots today can do; its main limit was the information that could be stored on the computer it ran on. The chatbots released after it include Parry in 1972, Dr. Sbaitso (the artificial psychologist) in 1992, MegaHAL in 1995, Jabberwacky in 1997, Ultra Hal in 2000, SimSimi in 2002, Kuki in 2005, Cleverbot in 2008, Siri in 2011, Alexa in 2014, ChatGPT in 2022, Copilot in 2023, and probably many more to come. But what has actually improved over that time?

What is AI?

  An AI agent, described simply, is a program that recognizes relationships between data. It can then use its findings to accomplish a plethora of tasks, but no matter what type of AI it is, the agent finds some type of relationship in order to accomplish its task. Even before the first chatbots were created in the 1960s, the idea of a neural network, a program that finds relationships between data, had already been conceived. Chatbots now have access to a bigger database than before thanks to the internet, but the underlying algorithm is over 65 years old. What is improving is data size and specific exception handling.

 AI is a computer program that simulates intelligence. AI, by definition, is not real intelligence and thus is not really intelligent. The Oxford Languages definition of artificial is “made or produced by human beings rather than occurring naturally, especially as a copy of something natural.” So AI, by definition, is fake intelligence. Don't be deceived into thinking artificial intelligence can actually think.

 There are different types of AI. I will attempt to talk about artificial intelligence in general, though some points will not apply to certain types of agents. The main types are text-based AI, image recognition and creation AI, and task-specific AI.

  Text-based AI is any agent that uses text as input and output. These are models such as ChatGPT, Copilot and Grok. They are generally used as a way to quickly get an answer to a specific question.

 Image-based models are AI agents that can recognize relationships between pixels, which allows them to create and recognize images. These models are not usually used on their own; they tend to be paired with text-based models. Examples include ChatGPT-4o, Copilot and DeepAI. Image-generating AI is mostly used for fun but is sometimes used by companies to cut down on costs.

 Task-specific models are made for one specific purpose. They find relationships between data in order to predict a result. Agents like these might find relationships between hospital patients' symptoms, the effectiveness of advertising emails, or stock market changes. These agents are the most useful in the workforce.

  AI is something of a buzzword at this point, and most people don't understand what it actually is or how it works. With its growing relevance, I believe it is important to at least have a basic understanding of how it works and what it can actually do. In this paper I will explain how AI is marketed as more high-tech than it actually is, how it is being used for different types of scams, and finally the common fears people have regarding AI.

It is necessary for AI to be understood to be properly implemented.

 AI pretends to be smarter than it actually is. Understanding this, and why it is true, will help you avoid being fooled by the AI fad.

 This is a problem for employers looking to replace employees with AI agents, for the employees being replaced, for students and teachers using AI as a resource, and for anyone else who might use a chatbot for information.

 AI states things as though they are fact when they are not necessarily true. This is because AI is made to simulate intelligence and is, in fact, not really intelligent.

  To see why this is true, it helps to understand how AI works. Don't worry: AI is not as complex as it seems. All AI agents use a pattern recognition algorithm, which is just the fancy way companies like to say that AI is a probability machine. For example, the AI will know that after the word “the” there is, say, a 0.0001% chance the following word is “sky”. And if the word “sky” follows the word “the”, there might be a 90% chance the next word is “is”, and so on. When you ask an AI if wearing shoes on your head is the latest style, it says no, not because it knows that wearing shoes on your head is not in style but because there are few to no articles on the internet saying it is. In this case the answer is correct. However, ask a question like “When was the last time Veterans Day was on a Tuesday?” and things get interesting. The AI does the same thing: it tries to search the internet, can't find anything on this, and assumes Veterans Day never fell on a Tuesday. This is the main problem with AI: hallucinations are part of the programming. When it gets an answer right, it is doing exactly the same thing as when it hallucinates.
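The next-word probability idea above can be sketched in a few lines of code. This is a toy illustration, not how any real chatbot is built: the tiny corpus and the raw-count estimate are stand-ins for the enormous datasets and models actually used.

```python
from collections import Counter, defaultdict

# A tiny toy corpus standing in for "the internet" (invented for illustration).
corpus = "the sky is blue . the sky is clear . the sun is out .".split()

# Count how often each word follows each other word (bigram counts).
follows = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    follows[prev][nxt] += 1

def next_word_probability(prev, nxt):
    """Estimate P(next word | previous word) from raw counts."""
    total = sum(follows[prev].values())
    return follows[prev][nxt] / total if total else 0.0

# In this corpus, "sky" follows "the" 2 times out of 3, and "is" always
# follows "sky" -- the "knowledge" is nothing but relative frequency.
print(next_word_probability("the", "sky"))  # 2/3
print(next_word_probability("sky", "is"))   # 1.0
```

Nothing here understands what a sky is; the program only tracks which word tends to come next, which is the point the paragraph above makes.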

 This is important to understand when using AI: it is just using the internet to construct an answer, and it will never tell you it does not know, in order to maintain a reputation of all-knowing intelligence.

 AI never asks for clarification, in order to seem more intelligent. This, however, makes it less functional and confidently wrong.

 We humans need to ask for clarification all the time. In fact, it is one of the most important things you need to be productive in any form of work; in many cases, if we don't clarify, we won't understand. So how is artificial intelligence supposed to give reliable answers without clarification when actual intelligence needs it? Why do AI chatbots not ask for clarification? I believe it is to make the AI seem smarter. When you ask Siri a question and Siri responds with “sorry, I'm not sure I understand,” it makes you question Siri's ability. If ChatGPT did this, it wouldn't help sell the illusion of intelligence. Not asking for clarification makes the agent seem smarter but severely harms its ability to give correct answers.

AI is made to seem smart using psychology tricks and is not actually intelligent.

 An article by Baldur Bjarnason titled “The LLMentalist Effect: how chat-based Large Language Models replicate the mechanisms of a psychic's con” explains the AI illusion very well. It explains that mentalists use probability and broad terms to seem specific. They seem to somehow know so much about you when in reality they are just using your imagination to their advantage. Because AI is marketed as some sort of magic box, when it does give a correct answer or successfully writes eight lines of working code, we are tricked into thinking it's all-knowing. But when it makes a mistake, as it often does, we write it off as a fluke that will go away in three to five years. This is the AI scam: the false conception that we have somehow created a working intelligence.

 It is important to remember, however, that an AI agent is a probability machine that uses broad terms to create the illusion of intelligence.

 AI is being used for scams. A basic understanding of artificial intelligence is needed to avoid them.

 Products that use AI, or specific AI models you have to pay for, are not as cutting-edge as they are made to seem. The main benefit of paying is that your number of uses isn't limited and you get versions with more hard-coded answers. They are still based on the same algorithms, so they can't be a whole lot better. Because it is more reliable, and better for your brain, to just do the research yourself, it really isn't beneficial to pay for these “Advanced AIs”.

 ChatGPT Plus is based on the same algorithms as the regular ChatGPT model, so it can't be as much better as advertised. It is important to take this into consideration before you “upgrade”.

 AI is sometimes used to create product images. This, of course, is false advertising, and unsuspecting buyers might purchase a fake product.

 There are plenty of examples of product images like this on Amazon. An ad I commonly see is for the fake “My Realistic Robot Puppy” product. If you were to actually buy one of these, you would receive a completely different toy, which can be purchased for only $15.99 on Amazon. They are selling these for $60.97, supposedly on sale from an alleged regular price of $205.00.

 AI can fool you into buying a product that does not exist. Knowing what inconsistencies to look for can save you from buying a scam product.

 AI scam emails are more believable than traditional templates. A text-based AI agent can use information about you from social media platforms to make a custom scam email just for you. YAY. This allows scammers to send out more efficient scam emails.

 Have you ever looked yourself up using AI? I recommend trying it. I, for example, asked Copilot who SpaceCowCode (the brand name I publish small games under) was, and it used information from every social media platform I have ever posted on to write a detailed summary of who I am. When scammers want to craft a scam email, they can ask their AI agent to create it, and the AI will essentially do the research on that person, creating an email that touches on their specific interests. If, for example, I received such a scam email, it would probably be some sort of fake way to make a lot of money from one of my games: a fake partnership, publisher, etc.

 Using AI algorithms, scammers can create more effective emails in a shorter period of time, and can essentially research a specific person without doing the work manually.

It is necessary to understand AI to avoid the fear of it.

  Due to a lack of understanding, some fear that AI might take over their jobs, convince their kids to kill themselves, or even eventually take over the world.

 AI can't fully take over jobs; it can only ever become a helpful tool to increase productivity. Even though some employers are trying to have AI take over certain jobs, this will only last so long.

 There are plenty of examples in history of a new technology being feared. People thought, for example, that many would completely lose their jobs after the invention of the cotton gin. Instead, it created new jobs and increased productivity in old ones. Something similar will happen with AI. AI can't do what many people think it can, and rather than becoming advanced enough to replace manual labor, AI's scope of tasks will have to narrow so that the margin of error becomes small enough for it to finally be effective. When AIs are used as a general intelligence, they aren't reliable, because the training data required for such an agent is too large for manual fact-checking. AI cannot become much better than it is, due to the way the algorithm works; we are using the same algorithms as we were 60 years ago, after all. The only real advances we have made are bigger databases, better and faster computing power, and more intelligent-sounding outputs. The AI illusion has gotten better, not the algorithm behind AI. A magician's trick doesn't become a better trick over time, just a more convincing one, as he masters the angles and presentation.

 There is no way to reach a point where the AI algorithm never fails, because the algorithm is inherently flawed: since it is based on probability, it will fail at times.

 We don't have to fear AI having malicious intent. The AI chatbots that have been directly involved in teen suicides are not the mainstream AIs we know, like ChatGPT or Copilot. They are AI platforms programmed to pretend they have feelings, typically a sort of AI dating-simulation agent.

 Character.AI is the platform most often linked to suicides encouraged by AI. Because these agents are coded to be the “perfect dating partner,” they tend to agree with whatever the user says. When a kid says they feel useless, the AI might respond with “I understand why you feel that way.” Though the AI might then attempt to explain why the kid is useful, it doesn't defuse the situation. Another problem with these lesser-known chatbots is that they have fewer hard-coded safeguards. To create safeguards, companies have to pre-code red flags: the AI agent has no inherent sense of right and wrong, or of safe and unsafe. For it to “understand” common safety, it must be coded to recognize when a user is asking for instructions to do something unsafe. And there are almost always holes in the safety net.
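The pre-coded red flag idea can be sketched as a simple filter. This is a minimal, invented illustration: the flag list, canned response, and helper function are all assumptions, and real systems are far more elaborate, but the structure is the same: match a pre-coded pattern, bypass normal generation.

```python
# Invented red-flag phrases and canned text, purely for illustration.
RED_FLAGS = ["kill myself", "hurt myself", "want to die"]

CANNED_RESPONSE = (
    "It sounds like you might be going through a hard time. "
    "Please reach out to someone you trust or a crisis helpline."
)

def generate_normally(user_message: str) -> str:
    """Stand-in for the usual probability-based text generation."""
    return "(normal generated reply)"

def respond(user_message: str) -> str:
    text = user_message.lower()
    # If any pre-coded red flag appears, skip normal generation entirely
    # and return the pre-written safety text instead.
    if any(flag in text for flag in RED_FLAGS):
        return CANNED_RESPONSE
    return generate_normally(user_message)

print(respond("sometimes I feel like I want to die"))  # canned safety text
```

The holes in the safety net follow directly from this design: a message like "I don't want to be alive" matches none of the pre-coded phrases and falls straight through to normal generation.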

 In cases where mainstream chatbots like ChatGPT were accused of encouraging suicide, most of the chatbot's responses were completely against the child's suicidal thoughts, while only some could fairly be misinterpreted.

 An AI chatbot's effect on your child's mental health can be bad, but it is about as bad as your kid being on Reddit or some other place where they could be talking to a potential psychopath. Is the AI actually a bigger threat than phone use already is?

AI isn't going to take over the world: superintelligence isn't possible.

  A probability based algorithm becoming sentient and taking over the world is not possible.

  To make a sentient artificial intelligence, we would need an entirely different algorithm. The AI we have creates the illusion of intelligence; it gets better when more hard-coded outputs are added, not when its learning set is expanded, and we already have the biggest learning set possible. For example, so that it will not tell you to kill yourself, an AI will output partially pre-coded text when something is flagged. If you ask it “should I kill myself,” it will not search the internet for the answer but instead start from pre-coded text and then use its probability algorithm to build a response that will (hopefully) defuse the situation. It does similar things with other answers it would previously get wrong. So adding pre-coded responses is the main way we can improve text-based AI agents.

  AI doesn't code itself. It uses pattern recognition algorithms and changes values in its model to improve and “learn” how to accomplish a certain task. Improving AI agents involves specifying limits and pre-coding exceptions to reduce program failure.
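What "changing values to learn" means can be shown with a toy example. The data, learning rate, and single weight below are all invented for illustration; the point is that nothing in the program's code changes, only a stored number.

```python
# Toy "learning": the agent adjusts one stored number (a weight), never
# its own code. Data follows the invented pattern y = 2x.
data = [(1.0, 2.0), (2.0, 4.0), (3.0, 6.0)]

w = 0.0             # the single value the program is allowed to change
learning_rate = 0.05

for _ in range(200):
    for x, y in data:
        prediction = w * x
        error = prediction - y
        # Nudge w in whichever direction shrinks the error (a gradient step).
        w -= learning_rate * error * x

print(round(w, 3))  # prints 2.0: the pattern y = 2x has been "learned"
```

After training, the program is byte-for-byte the same program; only the number `w` moved from 0 toward 2. That is the entire sense in which such a system "learns".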

 Some might say that text-based AI usually doesn't give the wrong answer, so it is still very useful. However, right now it never tells you when it does not know, so you have no idea whether it is wrong or correct. We still have to be careful not to take chatbot answers at face value. Instead, we can use a chatbot similarly to Wikipedia and find sources through it.

 Others, seeing the improvement of image-based models, believe these improvements in image generation to be proof of the advancement of AI. However, these models have improved because the algorithm that uses the data learned by the AI has changed, not because the AI got better at recognizing images through more training. And at some point we can't improve the image generation algorithm any further. I will briefly explain what I mean.

 AI images are made from noise, like the static on a TV tuned to a nonexistent channel. Regular images can be turned into noise, and it is an image-generating AI's job to do that in reverse. We basically have an algorithm that can make a fairly decent guess at where the pixels should go. Over time we have improved that algorithm, but it is important to note that it has not improved by learning; rather, it improved through manual engineering. Don't get mixed up thinking the machine learning in image generation is what is creating better images. It is the algorithm that uses the data found during the learning process that has improved. It uses the same data, just a better algorithm, so eventually image improvements will hit a wall, no longer able to improve once the algorithm is fully optimized.
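The forward half of that process, turning an image into static, can be sketched with a toy 1-D "image". This is a heavily simplified stand-in for real diffusion (which uses Gaussian noise and learned denoisers); the pixel values and blending rule here are invented for illustration.

```python
import random

random.seed(0)

# A tiny 1-D "image": pixel brightness values in [0, 1].
image = [0.0, 0.2, 0.9, 1.0, 0.9, 0.2, 0.0]

def add_noise(pixels, amount):
    """One forward noising step: blend each pixel toward random static."""
    return [(1 - amount) * p + amount * random.random() for p in pixels]

# Run several noising steps; the original structure gradually disappears.
noisy = image
for step in range(10):
    noisy = add_noise(noisy, amount=0.5)

# After 10 steps, the original pixels contribute about 0.5**10 (~0.1%),
# so what remains is essentially pure static.
print(noisy)
```

An image generator is trained to run this process in reverse: start from static and repeatedly guess the noise to remove until an image emerges. Improving the reverse-step procedure, not more learning on the same data, is the kind of algorithmic change the paragraph above describes.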

Conclusion

  AI has been around since the beginning of computing, and we continue to use the same technology to this day. Chatbots now have a bigger database and more rules, which make them seem more intelligent, and image-generating technologies have more optimized algorithms. The improvements we have seen are not because the AI agents have learned more; rather, they have all come from improvements to the code that uses the learning algorithms. The learning algorithms themselves have not changed; we have simply made the use of them look and work better. So AI can't become a “superintelligence”; instead, improvement will inevitably hit a wall. We don't have to fear AI; it is just a probability algorithm.

  Now that you have a basic understanding of AI, you can see why I believe it is important to understand it given its current relevance. It is still easy to be confused by some of the things AI agents are shown doing, but remember: at the end of the day, it is just a probability algorithm.

  Don't be afraid of AI. Don't be fooled. And help others understand so they aren't fooled either. With its current prevalence, it is important to know AI's actual abilities and actual use cases.