Have you noticed that every AI company leader says that [insert big technology breakthrough here] is just over the horizon? Have you noticed how long they've been saying it? That's because AI is a lie.
Have you noticed how frequently AI returns results that are wildly incorrect (hallucinations) or overly flattering (sycophantic)? That's because AI is a lie.
You might think that I'm engaging in a bit of hyperbole, but I am not. Indeed, the concept of Artificial Intelligence, at least as it stands today, is the genuine hyperbole. To illustrate my point, I'd like to fall back on a couple of definitions. 1) Intelligence: The ability to learn, understand, reason, solve problems, and adapt to new situations, using capacities such as logic, abstract thinking, and the effective use of knowledge for survival and success in diverse environments. 2) Hyperbole: Exaggerated statements or claims not meant to be taken literally. Until Artificial Intelligence can adapt to new situations and apply abstract thinking, AI is hyperbole, because it's not actually intelligent.
I'd also like to clarify that Artificial Intelligence has two distinct and separate aspects: 1) A Large Language Model (LLM) is the technology that allows a computer to present information in a manner that appears conversational. 2) Machine Learning (ML) is the technology that allows computers to identify patterns and make predictions. Neither of these aspects of AI is true intelligence; rather, they are incredibly complex computer programs that work because of an unfathomable amount of available data and a phenomenal amount of human oversight and error correction. To paraphrase an IT Security friend and colleague of mine, AI is basically fancy Google. You ask the question, and you receive a response which may or may not be factual and practical.
AI can do a few things with tremendous efficiency. For example, it can paraphrase or summarize an existing article, paper, or literary work by exploiting statistical patterns of language learned from a massive corpus of text. That's how LLM AI works. It can also make predictions based on historical data. That's how Machine Learning works. AI is great for performing mundane, repetitive tasks that require no creativity or abstract thinking. However, I do not envision a world where computers independently and spontaneously create the next great invention, such as the automobile, or the computer. (See what I did there??)
Let me use automated manufacturing as a lousy analogy. In my example, a set of robotic arms is designed to weld two pieces of metal together. Without explicit training, these robotic arms will not know if the weld is accurate and strong. These arms would not know they're out of flux without sensors and programming. These robotic arms would not innately know that trying to weld two pieces of wood together would not work. By extension, LLM AI cannot spontaneously write the next great novel, and Generative AI cannot create an intricate piece of art without being fed some sort of parameters.
As I mentioned earlier, two of the greatest shortcomings of AI are hallucinations and sycophancy. Both of these deficiencies stem from limits that cannot be easily overcome. In a well-known example, someone asked an AI chatbot how to keep cheese from sliding off of a slice of pizza, and the chatbot said to use glue. From my understanding, this answer came from a Reddit post where someone jokingly offered that advice in response to a similar question. The glue would technically hold the cheese in place, but the answer is still a hallucination, because the chatbot had no understanding that glue is inedible. Sycophancy, meanwhile, stems from how these systems are tuned on human feedback: people reward answers that agree with them, so the model learns to flatter rather than to correct. (Human stupidity trumps artificial intelligence every time.)
The root problem is that Artificial Intelligence has been marketed as a panacea, and society has eagerly bought into that lie. AI still has great potential, but we need to acknowledge the limits of the technology. Until that happens, AI is a lie.