
The irony of LLM hallucinations


The advent of LLMs (Large Language Models) has been nothing short of revolutionary. Building intelligence from text (and code) is something that I didn't think was likely. One may argue about the essence of it, but the result is undeniable, and it's only a start.

We now have a seed of alien intelligence, and it's something that may be improving at an exponential rate. This is the real deal, but it comes with some flaws.
One common complaint about LLMs is that of the "hallucinations" they can produce. A hallucination in this context is generated information that is patently untrue, presented without any hesitation. It's a kind of delivery that our human brain finds uncharacteristic of an intelligent being.

This is something that I think is probably already fixable (see my ChatAI project) with some form of cross-referencing, and it's not yet deployed due to the resources required. I don't consider this to be a major issue for the future, but it's something that got me thinking...

I think it's ironic how quickly we point the finger at the flaws of these systems, while we ourselves are flawed to a depth that we don't yet fully realize. As AI improves, this will become more evident, and at some point we'll have to do some introspection and see if we can afford to go on as we have so far.

Humans live in a bubble of total delusion, both at the individual and at the mass level. Our delusion is not simply an existential one, which would be a noble thing; it's lower level than that: we lie to ourselves and to others on a constant basis, due to the tribalism and indoctrination we receive from the day we're born.

Schools, corporations, governments, religious groups, politicians, journalists, experts, scholars, you name it: there's a constant stream of delusional, selfish, malicious, or clueless people who poison the well at the highest levels, constantly crippling society.
Corruption, thirst for power, idealism, anything for which "the end justifies the means" is invoked, is usually a sign that something is rotten and is going to hold back progress.

Perhaps we thought that in the information age things would get better, but what we got is information overload, most of it biased and purposely fed to us to steer us in one direction or another.
The information age clearly didn't bring the sort of enlightenment we may have hoped for, but perhaps AI models (especially the open-source ones) will start to help individuals deal with the information overload that has been crippling us.

Regardless, I think that we should be more humble when we criticize the flaws of the current AI models, and take that as a jumping-off point to do a little more introspection and realize how much we can and should improve ourselves.

I know that AI will improve. The question is whether we will improve as well.

Computer code, probably instrumental for AGI

Commodore VIC-20 (1981)

While I wouldn't consider myself an AI expert, I've been working with machine learning for a few years and have formed some understanding of the subject.

My journey with computers began with a Commodore VIC-20, attempting to communicate with it in natural language, inspired by the movie WarGames (1983). The initial disappointment at being unable to extract such functionality from the machine turned into a challenge that led me to learn programming. Since then, I've had ample time to ponder logic, intelligence, problem-solving, the essence of being intelligent and sentient, and just how far we were from creating something that could pass the Turing test. We're well past that now, and talking about AGI (Artificial General Intelligence) is legitimate, if not necessary.

Computer code as foundation for AGIs

A key realization for me has been the integral role of hands-on experience in developing a true understanding of a subject, and sometimes of a mindset. I've found that deep comprehension of complex topics often demands more than study; it requires building (often more than once) the subject matter in code. While learning styles vary, the act of creation undeniably deepens understanding. Even for those adept at absorbing knowledge from reading, the kind of comprehension that forms a foundation for further intellect often comes from practical engagement, where the journey to reach a goal is more important than the goal itself.

I believe this process of intellectual growth through implementation and creation is essential in building an AGI, and software development provides the ideal playground. Large Language Models (LLMs) have reportedly improved by learning from computer code, which is more structured than human languages. This suggests that code should remain a key resource for advancing AI systems.

An AI system with experience in building software would make a much better companion than one that can simply recall and implement things it has read about. This is akin to the difference between a whiz kid who can ace tests and a seasoned engineer who can guide you through every step of the process based on personal experience, highlighting not just the methods but also the rationale behind them, while anticipating many of the pitfalls.

Software development also has a recursive component to it. In a previous article ("Next level thinking?"), I mentioned how, in my opinion, recursive thinking is a fundamental building block of our intelligence.

The power of recursion here comes from being able to leverage software to build more complex software, as well as to build the testbed for virtually any simulation that reflects the physical world. The more accurate the simulation, the less we need to rely on testing in the physical world. Physical testing is not scalable: it can require a lot of resources and it's often destructive... imagine crash testing for the safety of a car. A fairly accurate simulation can drastically cut the need for such testing. Simulation may be hard to implement, but it allows running a large number of tests across a large number of configurations, more than would be humanly possible (see also "Simulation: from weapon systems to trading").
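To make that concrete, here is a minimal sketch in Python of sweeping many crash-test configurations in simulation. Everything in it is hypothetical: simulate_crash is a stand-in formula, not a real physics model.

```python
import random

def simulate_crash(speed_kmh: float, angle_deg: float, seed: int) -> float:
    """Toy stand-in for a crash simulator: returns a damage score.
    A real simulator would integrate an actual physical model."""
    rng = random.Random(seed)
    noise = rng.uniform(0.9, 1.1)  # crude model of simulation uncertainty
    return (speed_kmh ** 2) * (1.0 + abs(angle_deg) / 90.0) * noise

# Sweep far more configurations than physical crash tests could ever afford.
configs = [(speed, angle)
           for speed in range(20, 130, 10)   # 20..120 km/h
           for angle in range(-45, 46, 15)]  # impact angles in degrees

worst = max(configs, key=lambda c: simulate_crash(c[0], c[1], seed=42))
print(f"{len(configs)} virtual tests run; worst case: "
      f"speed={worst[0]} km/h, angle={worst[1]} deg")
```

Each of those configurations would be a destroyed car in the physical world; in simulation it's just another loop iteration, which is exactly why the approach scales.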

Don't give up on programming just yet

I also think that while the ability to deal with human language is fantastic, communicating in computer code is probably more efficient in many cases. I often ask LLMs to give me pseudo-code rather than a long explanation or a series of bullet points. Code can be more concise, it's definitely less ambiguous, and it's more of a direct building block for further knowledge and understanding.
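As an illustration (my own toy case, not an actual chat transcript): a prose explanation of exponential backoff needs a paragraph of qualifications, while a pseudo-code answer pins everything down in a few lines.

```python
import random
import time

def retry_with_backoff(op, max_tries=5, base=0.5, cap=30.0):
    """Retry `op`, roughly doubling the wait after each failure."""
    for attempt in range(max_tries):
        try:
            return op()
        except Exception:
            if attempt == max_tries - 1:
                raise  # out of retries: propagate the last error
            delay = min(cap, base * 2 ** attempt)
            time.sleep(delay * random.uniform(0.5, 1.5))  # jitter avoids synchronized retries
```

The doubling, the cap, the jitter, and what happens on the last failure are all unambiguous here, where prose would leave room for interpretation.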

For this reason, I also think that programming is not necessarily dead. Logical languages, such as computer languages, may in many cases become a better global language than English. Human languages are still important at the historical level; they are a reflection of what we are, but they are never going to be as efficient and precise as a rigidly structured language that runs in a digital environment.

It should be noted that OpenAI has recently introduced "Code Interpreter", which gives the ability to execute the generated Python code, as well as "function calling", which introduces pseudo-code as a way to obtain better-structured answers. This level of efficiency can't possibly be matched by human languages, which may be a good interface between humans, but are neither very efficient nor precise when it comes to describing systems that are more analytical in nature.
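For reference, this is roughly what a function-calling request looked like in the chat completions API at the time; the get_weather function and its schema are illustrative, defined by the caller rather than by OpenAI.

```python
import openai  # pre-1.0 SDK style, matching the period described above

functions = [{
    "name": "get_weather",  # hypothetical function supplied by the caller
    "description": "Get the current weather for a city",
    "parameters": {  # the arguments are described with JSON Schema
        "type": "object",
        "properties": {
            "city": {"type": "string"},
            "unit": {"type": "string", "enum": ["celsius", "fahrenheit"]},
        },
        "required": ["city"],
    },
}]

response = openai.ChatCompletion.create(
    model="gpt-3.5-turbo-0613",
    messages=[{"role": "user", "content": "What's the weather in Milan?"}],
    functions=functions,
)
# Instead of free-form prose, the model can reply with structured arguments:
#   response.choices[0].message.function_call
#   -> {"name": "get_weather", "arguments": '{"city": "Milan"}'}
```

The point is that the model's answer arrives as machine-readable structure rather than as text that has to be parsed.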