Are AIs Good Enough?
If you're interested in the AI business but can't get your head around it, you're not alone. It's a big hype train fuelled by greed for quick wins and promises of riches, and it will crash soon. But the tech & the underlying research are solid and very promising.
Admit it: when I say AI, the first thing that comes to mind is ChatGPT. You're not wrong to think that, but it's also not a fair representative. ChatGPT is a big, bloated database fronted by a very expensive LLM. The LLM part is promising, but contrary to what everyone thinks, it won't become an AGI (Artificial General Intelligence), that is, a self-aware intelligence. And here's why.
Remember the CPUs
The reason why ChatGPT and other AIs are this successful these days has a very straightforward answer: we've broken through the barrier of processing power. The technology & algorithms behind LLMs and GenAI have existed for a long time, but the training data and the hardware to process it weren't there yet. So earlier systems focused on specific tasks instead of ingesting the whole internet and regurgitating it on demand. But now that big tech has a lot of money (I mean, A LOT OF MONEY) and GPUs have proven themselves very capable during the latest crypto-mining hype & dump, we have companies like OpenAI with racks upon racks of GPUs lined up to train their AIs.
But consider this for a second: Intel spent decades increasing CPU clock speeds. About ten years ago, CPUs became so powerful that further increases in clock speed mean little for most of the tasks we expect computers to do. Speeds might continue to climb yearly, but their usefulness has peaked. Outside specialised use cases, no one wants a machine with insane clock speeds anymore: it's expensive for what it does, and hardly anyone can utilise it fully.
Instead of investing in overpowered machines, many companies now invest in machines with merely okay computing power. In aggregate, these deliver the same processing throughput but with more resilience, and their distributed-computing capabilities unlock more scenarios.
Where am I going with this?
Does Size Really Matter?
The current capabilities & efficiency of AIs depend heavily on their enormous training datasets and on the processing power to train on them. The exponential improvement we've seen didn't come from teaching AIs new skills; it came from pumping them up with more data. They can now answer more questions, but with even more questionable answers. No one takes ChatGPT's answers at face value anymore.
On top of that, the only capability that AIs have right now is to regurgitate their training data. They are glorified search engines with conversation capabilities, nothing more. Severely limited in their skill sets despite the vastness of their training datasets, they are just talking Wikipedias on steroids.
They have no reasoning capabilities, they can't make decisions, and ultimately they lack any understanding of the topics they talk about. They tell you what you want to hear. They have no grasp of mechanics, ethics, law, religion, education or even conversation. They speak well, but not because they understand that grammar should be followed. They do it because they have read so much text, enough to dwarf every library on the planet, that they can figure out which tense belongs in a sentence.
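To make that last point concrete, here's a toy sketch of my own (not how any real LLM is built, and vastly simpler): a bigram model that appears to "know grammar" purely by counting which word tends to follow which, the same statistical trick LLMs perform at an incomparably larger scale.

```python
from collections import Counter, defaultdict

# A tiny made-up "corpus"; real models ingest trillions of words.
corpus = "she has eaten . she has slept . he has eaten . he is sleeping".split()

# Count how often each word follows each other word.
counts = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    counts[prev][nxt] += 1

def predict(prev: str) -> str:
    # Return the most frequently seen follower.
    # No understanding involved -- just picking the biggest count.
    return counts[prev].most_common(1)[0][0]

print(predict("has"))  # "eaten", simply because it followed "has" most often
```

The model "chooses the right tense" after "has" without knowing what a tense is; it only knows the counts. Scale the corpus and the statistics up by many orders of magnitude and you get fluent text, still without comprehension.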
To AGI Or Not To AGI
So, do I think we will achieve AGI anytime soon? No fucking way. But I do believe that AIs will help us quite a lot, even though they are stumbling toddlers at the moment.
I expect that someone will eventually unlock the secrets of reasoning and create a new type of AI, which will shift the market away from OpenAI.
I worry that if that doesn't happen soon, the whole AI hype will flop and fall on its face, just like cryptocurrencies did. Like them, AIs will become a market manipulation tool with no real value-add.
I really root for AIs to happen, but not like this. Never like this.