Tag Archive: Programming

Misplaced Pessimism about AI

It's hard to believe how much we've gained with AI and LLMs in the past couple of years, and how much more is expected to happen.

We've become so accustomed to continuous progress that, at this point, anything less than another major advancement deployed to the masses easily disappoints expectations.

I do understand that a lot of cheesy hype has built up, mostly because that's how business works. All sorts of businesses are rushing to capitalize on the new buzzword, but this doesn't mean that we don't already have revolutionary technology that is changing the world.

If all LLM development stopped today, we'd still have models like Llama 3.1 8B runnable on a laptop, and GPT-4-level models, some of them Open Source, running externally with costs that are dropping rapidly (if we can predict one thing, it's that hardware keeps getting faster and cheaper).

What we can do today is already nothing short of amazing. There is so much untapped potential in the existing models, Open Source and not, and the only reason this potential hasn't materialized is that we simply haven't had the time to work with it.

This is a good sign, because it means that things are evolving rapidly and there's little time to settle on any one AI model. But if the rate of improvement of LLMs is truly slowing down, or possibly reaching a plateau, then we, the developers/hackers/engineers, would gladly start working toward making the best of what we've got.

Parallels with game console development

Game consoles are an example of this, at least as they used to be: every console had some new, esoteric, quirky hardware, developed perhaps on a hunch and under heavy constraints by the hardware engineers, then let loose on the programmers to make the best of it.

New games for a new console were usually the worst in terms of technology, as the programmers had to struggle between learning the new hardware and getting the game out the door.
Then, with the first releases out of the way, the developers would start to really hack the hardware, going lower level and squeezing out performance wherever possible.
As a result, games would start to look better and better, while the consoles would become less expensive to produce.

Count on hackers

There are many engineers/hackers out there that can and will make the best of what AI has to offer, even if progress in the field were to suddenly stop today. In fact, some of us may feel more at ease if everything slowed down a bit and we had a chance to build something on top of a relatively stable platform with not so many moving parts.

I don't think that AI (LLMs or whatever comes next) is about to slow down, but if it comes to that, we already have plenty to work with for years to come.

Computer code, probably instrumental for AGI

Commodore VIC-20 (1981)

While I wouldn't consider myself an AI expert, I've been working with machine learning for a few years and have formed some understanding of the subject.

My journey with computers began with a Commodore VIC-20, attempting to communicate with it in natural language, inspired by the movie WarGames (1983). The initial disappointment from the inability to extract functionality from this tool turned into a challenge that led me to learn programming. Since then, I've had ample time to ponder logic, intelligence, problem-solving, and the essence of being intelligent and sentient, and just how far we were from creating something that could pass the Turing test. We're well past that now, and talking about AGI (Artificial General Intelligence) is legitimate, if not necessary.

Computer code as foundation for AGIs

A key realization for me has been the integral role of hands-on experience in developing a true understanding of a subject, and sometimes of a mindset. I've found that deep comprehension of complex topics often demands more than study; it requires building (often more than once) the subject matter in code. While learning styles vary, the act of creation undeniably deepens understanding. Even for those adept at absorbing knowledge from reading, the kind of comprehension that forms a foundation for further intellect often comes from practical engagement, where the journey to reach a goal is more important than the goal itself.

I believe this process of intellectual growth through implementation and creation is essential in building an AGI, and software development provides the ideal playground. Large Language Models (LLMs) have reportedly improved by learning from computer code, which is more structured than human languages. This suggests that code should remain a key resource for advancing AI systems.

An AI system with experience in building software would make a much better companion than one that can simply recall and implement things it's read about. This is akin to the difference between a whiz kid who can ace tests and a seasoned engineer who can guide you through every step of the process based on personal experience, highlighting not just the methods, but also the rationale behind them, while anticipating many of the pitfalls.

Software development also has a recursive component to it. In a previous article ("Next level thinking?"), I mentioned how, in my opinion, recursive thinking is a fundamental building block of our intelligence.

The power of recursion here comes from being able to leverage software to build more complex software, as well as to build the testbed for virtually any simulation that reflects the physical world. The more accurate the simulation, the less we need to rely on testing in the physical world. Physical testing is not scalable: it can require a lot of resources, and it's often destructive. Imagine crash testing for the safety of a car: a fairly accurate simulation can drastically cut the need for physical tests. A simulation may be hard to implement, but it allows running a large number of tests across a large number of configurations, more than would be humanly possible (see also "Simulation: from weapon systems to trading").
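To make the scalability point concrete, here is a toy sketch: sweeping thousands of configurations of a crude one-dimensional "crash" model in a fraction of a second, where each physical equivalent would destroy a vehicle. The spring model and all the numbers are invented for illustration, not real crash-test physics.

```python
def peak_deceleration(speed_ms, stiffness_n_per_m, mass_kg=1500.0):
    """Ideal-spring crash model: peak deceleration a = v * sqrt(k / m).

    Energy balance (1/2 m v^2 = 1/2 k x^2) gives the max compression x,
    and the peak spring force k*x works out to a = v * sqrt(k / m).
    """
    return speed_ms * (stiffness_n_per_m / mass_kg) ** 0.5

def sweep(speeds, stiffnesses, limit_g=60.0):
    """Return every (speed, stiffness) pair whose peak g-load stays under limit_g."""
    g = 9.81
    return [
        (v, k)
        for v in speeds
        for k in stiffnesses
        if peak_deceleration(v, k) / g <= limit_g
    ]

speeds = [s / 2 for s in range(10, 61)]            # 5.0 .. 30.0 m/s
stiffnesses = [k * 10_000 for k in range(1, 201)]  # 10 kN/m .. 2 MN/m
safe = sweep(speeds, stiffnesses)
print(f"{len(speeds) * len(stiffnesses)} configurations tested, {len(safe)} within limits")
```

Ten thousand "crash tests" run instantly; the same grid in the physical world would be unthinkable, which is exactly the leverage simulation buys.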

Don't give up on programming just yet

I also think that while the ability to deal with human language is fantastic, communicating in computer code is probably more efficient in many cases. Often I ask LLMs to give me pseudo code rather than a long explanation or a series of bullet points. Code can be more concise, it's definitely less ambiguous, and it's more of a direct building block for further knowledge and understanding.
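A small illustration of why I ask for code instead of prose: compare a paragraph describing binary search ("repeatedly halve the interval, comparing the target against the middle element, narrowing to the left or right half...") with the algorithm itself, which is shorter and leaves no room for ambiguity.

```python
def binary_search(items, target):
    """Return the index of target in a sorted list, or -1 if absent."""
    lo, hi = 0, len(items) - 1
    while lo <= hi:
        mid = (lo + hi) // 2
        if items[mid] == target:
            return mid
        if items[mid] < target:
            lo = mid + 1   # target is in the right half
        else:
            hi = mid - 1   # target is in the left half
    return -1
```

Every edge case (empty list, missing element, which half to keep) is pinned down by the code itself, where an English description would have to spell each one out.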

For this reason, I also think that programming is not necessarily dead. Logical languages, such as computer languages, may in many cases become a better global language than English. Human languages are still important at the historical level, as a reflection of what we are, but they are never going to be as efficient and precise as a rigidly structured language that runs in a digital environment.

It should be noted that OpenAI has recently introduced "Code Interpreter", which gives its models the ability to execute the Python code they generate, as well as "function calling", which introduces pseudo-code as a way to obtain better structured answers. This level of efficiency can't possibly be matched by human languages, which may be a good interface between humans, but are neither efficient nor precise when it comes to describing systems that are more analytical in nature.
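To show what "function calling" looks like in practice, here is a minimal sketch: the model is handed a machine-readable function signature (a JSON Schema, as in OpenAI's function calling feature), and instead of answering in prose it returns structured arguments for that function. The `get_current_weather` function and its parameters are hypothetical examples, not a real API.

```python
import json

# A function made available to the model, described as JSON Schema.
# Name and parameters are invented for this illustration.
get_weather = {
    "name": "get_current_weather",
    "description": "Get the current weather for a location",
    "parameters": {
        "type": "object",
        "properties": {
            "location": {"type": "string", "description": "City name, e.g. Rome"},
            "unit": {"type": "string", "enum": ["celsius", "fahrenheit"]},
        },
        "required": ["location"],
    },
}

# Asked "What's the weather in Rome?", a function-calling model replies
# with arguments rather than a sentence, e.g.:
model_reply = '{"location": "Rome", "unit": "celsius"}'
args = json.loads(model_reply)
print(args["location"])  # structured, unambiguous, directly executable
```

The reply is unambiguous and machine-consumable, which is precisely the efficiency argument: no natural-language parsing stands between the model's answer and the code that acts on it.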