Like the guy whose baby doubled in weight in 3 months and thus he extrapolated that by the age of 10 the child would weigh many tons, you’re assuming that this technology has a linear rate of improvement of “intelligence”.
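The arithmetic behind that analogy is easy to check. A minimal sketch (the 3.5 kg birth weight is my assumption, not from the original; the doubling-every-3-months rate is from the analogy):

```python
# Naive extrapolation from the "baby weight" analogy:
# if weight doubles every 3 months, 10 years contains 40 doubling periods.
birth_weight_kg = 3.5            # assumed typical birth weight
doublings = (10 * 12) // 3       # 40 three-month periods in 10 years
extrapolated_kg = birth_weight_kg * 2 ** doublings
print(doublings)                       # 40
print(extrapolated_kg / 1000)          # extrapolated weight in metric tons
```

With 40 doublings the "projection" lands in the billions of tons, which is exactly why extrapolating an early growth rate is absurd.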
This is not at all what’s happening. The improvement in things like LLMs over the last year or so (say, between GPT-4 and GPT-5) is far smaller than it was earlier in that tech, and we keep seeing more and more news about problems with training them further and getting them improved. That includes the big one: training LLMs on the output of LLMs makes them worse, and the more LLM output is out there, the harder it gets to train new iterations on clean data.
(And, interestingly, no technology has ever had a rate of improvement that didn’t eventually tail off, so it’s a peculiar expectation to have for one specific technology that it will keep on steadily improving.)
With this specific path taken in implementing AI, the question is not “when will it get there” but rather “can it get there, or is it a technological dead end?” At least for things like LLMs, the answer increasingly seems to be that they are a dead end for the purpose of creating reasoning intelligence and doing work that requires it.
(And for all your preemptive defense of implying that critics are “AI haters”: no hate is required for this analysis, just analytical ability and skepticism, untainted by fanboyism.)
The difference here is that the current AI advancements are not just a consequence of one single technology, but of many.
Everything you wrote, and believe, depends on this being one tech, one dead end.
The real situation is that we finally have the hardware and the software to make breakthroughs. There is no dead end here, just a series of steps, each contributing by itself and by learning from its mass implementations. It’s like we got the first taste of AI and we can’t get enough, even if it takes a while until the next advancement.
That doesn’t even make sense. Merely having multiple elements that add up to a specific technology doesn’t make it capable of reaching a specific goal, just as throwing multiple ingredients into a pot doesn’t guarantee a tasty dish as output. And you have absolutely no proof that “we finally have the hardware and the software to make breakthroughs,” so you can’t anchor a forecast that what gets built on top of that hardware and software will achieve a great outcome entirely on your assertion that “it’s made up from stuff which can do greatness.”
As for the tech being a composition of multiple elements, that doesn’t mean much: most dishes too are compositions of multiple elements, and that doesn’t mean any random combination of stuff thrown into a pot will make a good dish.
The idea that more inputs make a specific output more likely is like claiming that “the chances of finding a needle increase with the size of the haystack”: the very opposite of reality.
Might want to stop using LLMs to write your responses and engage your brain instead.
Ah, there go the insults. Surely the best way to display the superiority of your argument, lol, and to show who is the rational one in any conversation. But I’ll let the first one slide, ok? Anyone can have a weak moment. For sure I’ve had many.
My post makes sense. You can claim, as you have, that multiple ingredients don’t guarantee a tasty dish, and fair enough, but on the other hand the opposite is also obviously not true. So by logic itself, I claim that’s not an argument against what I said.
I can also say it’s not a good comparison. We have a technology that is already giving us results. You can claim they aren’t good, but considering how many people already use it, that by itself could refute the claim, without even mentioning the case studies, of which there are plenty.
To the meat of the thing: maybe I can’t claim that we are headed for an AI nirvana, but by the same token you can’t say LLMs are at any kind of dead end, especially not one that would mean AI stagnation for the medium term.
But I can safely claim we are far closer than we were 3 years ago, by many orders of magnitude, the reasons being exactly hardware and LLMs. And this is exactly the reason for the investments in the very same tech, infrastructure, companies, institutions, universities, (…), that would invent new AI technology.
So, in the worst-case scenario for LLMs, they have accelerated investment and improved the infrastructure for future inventions. Worst case.
Got more insults?