The capabilities of AI post-GPT-3 have become extraordinary and, in many cases, clearly superhuman.
However (as the article admits) there is still no general agreement on what AGI is, or how (or even whether) we can get there from here.
What there is, instead, is a growing and often naïve excitement that anticipates it as coming into view, and unfortunately that will be accompanied by the hype-merchants desperate to be first to "call it".
This article seems reasonable in some ways but unfortunately falls into the latter category with its title and sloganeering.
"AGI" in the title of any article should be seen as a cautionary flag. On HN - if anywhere - we need to be on the alert for this.
AGI is an anonymous good model coming around the corner with no company and no LLM researchers attached. AGI is when the LLM hype-train threads are replaced with CEOs and laid-off researchers demanding UBI.
In the last example (the riddle), I generally assume the AI isn't misreading; rather, it assumes you didn't give it the riddle correctly, because it has seen the original already.
I would do the same thing, I think. It's too well-known.
The variation doesn't read like a riddle at all, so it's confusing even to me as a human. I can't find the riddle part. Maybe the AI is confused, too. I think it makes an okay assumption.
I guess it would be nice if the AI asked a follow up question like "are you sure you wrote down the riddle correctly?", and I think it could if instructed to, but right now they don't generally do that on their own.
> generally assume the AI isn't misreading, rather that it assumes you couldn't give it the riddle correctly, but it has seen it already.
LLMs don't assume; an LLM is a text completer. It sees something that looks almost like a well-known problem, so it completes it as that well-known problem. That's a failure mode specific to being a text completer, and it is hard to get around.
If you have the model output a chain of thought, whether it's a reasoning model or you prompt a "normal" model to do so, you will see examples of the model going "user said X, but did they mean Y? Y makes more sense, I will assume Y". Sometimes stretched over multiple paragraphs, consuming the entire reasoning budget for that prompt.
Discussing whether models can "reason" or "think" is a popular debate topic on here, but I think we can all at least agree that they do something that at least resembles "reasoning" and "assumptions" from our human point of view. And if, in its chain of thought, it decides your prompt is wrong, it will go ahead and answer what it assumes is the right prompt.
What your parent poster said is nonetheless true, regardless of how it feels to you. Getting text from an LLM is a process of iteratively attempting to find a likely next token given the preceding ones.
If you give an LLM "The rain in Spain falls" the single most likely next token is "mainly", and you'll see that one proportionately more than any other.
If you give an LLM "Find an unorthodox completion for the sentence 'The rain in Spain falls'", the most likely next token is something other than "mainly" because the tokens in "unorthodox" are more likely to appear before text that otherwise bucks statistical trends.
If you give the LLM "blarghl unorthodox babble The rain in Spain" it's likely the results are similar to the second one but less likely to be coherent (because text obeying grammatical rules is more likely to follow other text also obeying those same rules).
In any of the three cases, the LLM is predicting text, not "parsing" or "understanding" a prompt. The fact it will respond similarly to a well-formed and unreasonably-formed prompt is evidence of this.
It's theoretically possible to engineer a string of complete gibberish tokens that will prompt the LLM to recite song lyrics, or answer questions about mathematical formulae. Those strings of gibberish are just difficult to discover.
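For anyone who wants to see the "iteratively find a likely next token" loop concretely, here is a minimal sketch using GPT-2 via Hugging Face transformers purely as a stand-in for "an LLM" (the model choice and prompt are illustrative, not anything from the thread):

```python
# Minimal sketch of greedy next-token decoding: repeatedly pick the single most
# likely next token given everything generated so far. GPT-2 is only a stand-in.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tok = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")

ids = tok("The rain in Spain falls", return_tensors="pt").input_ids
for _ in range(6):                      # extend the prompt by six tokens
    logits = model(ids).logits          # shape: [batch, seq_len, vocab_size]
    next_id = logits[0, -1].argmax()    # greedy choice of the most likely next token
    ids = torch.cat([ids, next_id.view(1, 1)], dim=-1)

print(tok.decode(ids[0]))  # a small model very often continues "...mainly on the plain"
```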
So I just gave your blarghl line to Claude, and it replied "It seems like you included a mix of text including "blarghl unorthodox babble" followed by the phrase "The rain in Spain."
Did you mean to ask about the well-known phrase "The rain in Spain falls mainly on the plain"? This is a famous elocution exercise from the musical "My Fair Lady," where it's used to teach proper pronunciation.
Or was there something specific you wanted to discuss about Spain's rainfall patterns or perhaps something else entirely? I'd be happy to help with whatever you intended to ask."
I think you have a point here, but maybe re-express it? Because right now your argument seems trivially falsifiable even under your own terms.
If you feed that to Claude, you're getting Claude's "system prompt" prepended before the text you give it.
If you want to test convolution you have to use a raw model with no system prompt. You can do that with a Llama or similar. Otherwise your context window is full of words like "helpful" and "answer" and "question" that guide the response and make it harder (not impossible) to see the effect I'm talking about.
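To make the "raw model with no system prompt" distinction concrete, here is a rough sketch of the two setups. The model names are placeholders, not a recommendation; substitute any base/instruct pair you actually have access to.

```python
# A "raw" base model just continues your text; a chat-tuned deployment wraps the
# text in a template full of assistant framing before the model ever sees it.
from transformers import AutoTokenizer, pipeline

prompt = "blarghl unorthodox babble The rain in Spain"

# 1) Raw continuation: no system prompt, no roles, the model simply extends the text.
base = pipeline("text-generation", model="meta-llama/Llama-3.1-8B")
print(base(prompt, max_new_tokens=30)[0]["generated_text"])

# 2) Chat-tuned model: the chat template injects role markers (and sometimes a
#    default system prompt), which is why replies look like "Did you mean to ask...?"
chat_tok = AutoTokenizer.from_pretrained("meta-llama/Llama-3.1-8B-Instruct")
wrapped = chat_tok.apply_chat_template(
    [{"role": "user", "content": prompt}],
    tokenize=False,
    add_generation_prompt=True,
)
print(wrapped)  # inspect exactly what extra text surrounds your prompt
```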
No, I think the "reasoning" step really does make a difference here.
There's more than just next token prediction going on. Those reasoning chains of thought have undergone their own reinforcement learning training against a different category of samples.
They've seen countless examples of how a reasoning chain would look for calculating a mortgage, or searching a flight, or debugging a Python program.
So I don't think it is accurate to describe the eventual result as "just next token prediction". It is next token prediction that has been informed by a chain of thought, which was itself shaped by a different set of specially chosen examples.
Do you believe it's possible to produce a given set of model weights with an infinitely large number of different training examples?
If not, why not? Explain.
If so, how does your argument address the fact that this implies any given "reasoning" model can be trained without giving it a single example of something you would consider "reasoning"? (in fact, a "reasoning" model may be produced by random chance?)
> The fact it will respond similarly to a well-formed and unreasonably-formed prompt is evidence of this.
Don't humans do the same in conversation? How should an intelligent being (constrained to the same I/O system) respond here to show that it is in fact intelligent?
Imagine a Rorschach Test of language, where a certain set of non-recognizable-language tokens invariably causes an LLM to talk about flowers. These strings exist by necessity due to how the LLM's layers are formed.
There exists no similar set of tokens for humans, because our process is to parse the incoming sounds into words, use grammar to extract conceptual meaning from those words, and then shape a response from that conceptual meaning.
Artists like Lewis Carroll and Stanislaw Lem play with this by inserting non-words at certain points in sentences to get humans to infer the meaning of those words from surrounding context, but the truth remains that an LLM will gladly convolute a wholly non-language input into a response as if it were well-formed, while a human can't or won't do that.
I know this is hard to understand, but the current generation of LLMs are working directly with language. Their "brains" are built on language. Some day we might have some kind of AI system that's built on some kind of meaning divorced from language, but that's not what's happening here. They're engineering matrices that repeatedly perform "context window times model => one more token" operations.
> Imagine a Rorschach Test of language, where a certain set of non-recognizable-language tokens invariably causes an LLM to talk about flowers. These strings exist by necessity due to how the LLM's layers are formed.
Maybe not for humanity as a species, but for individual humans there are absolutely token sequences that lead them to talk about certain topics, with nobody able to bring them back on topic. Now you'd probably say those are recognizable token sequences, but do we have a fair process to decide what's recognizable that isn't inherently biased towards making humans the only rational actor?
I'm not contending at all that LLMs are only built on language. Their lack of physical reference point is sometimes laughably obvious. We could argue whether there are signs they also form a world model and reasoning that abstracts from language alone, but that's not even my point. My point is rather that any test or argument that attempts to say that LLMs can't "reason" or "assume" or whatever has to be a test a human could pass. Preferably a test a random human would pass with flying colors.
For one thing, LLMs absolutely form responses from conceptual meanings. This has been demonstrated empirically multiple times now, including again by Anthropic only a few weeks ago. 'Language' is just the input and output, the first and last few layers of the model.
So okay, there exists some set of 'gibberish' tokens that will elicit meaningful responses from LLMs. How does your conclusion - "therefore, LLMs don't understand" - follow here? Do you realize that you are also saying that humans have no understanding of what they see because of the Rorschach test?
>There exists no similar set of tokens for humans, because our process is to parse the incoming sounds into words, use grammar to extract conceptual meaning from those words, and then shape a response from that conceptual meaning.
Grammar is useful fiction, an incomplete model of a demonstrably probabilistic process. We don't use 'grammar' to do anything.
I don't see that as a problem. I don't particularly care how human intelligence works; what matters is what an LLM is capable of doing and what a human is capable of doing.
If those two sets of accomplishments are the same there's no point arguing about differences in means or terms. Right now humans can build better LLMs but nobody has come up with an LLM that can build better LLMs.
Is it not physically impossible for LLMs to be anything but "plausible text completion"?
Neural Networks as I understand them are universal function approximators.
In terms of text, that means they're trained to output what they believe to be the "most probably correct" sequence of text.
An LLM has no idea that it is "conversing", or "answering" -- it relates some series of symbolic inputs to another series of probabilistic symbolic outputs, aye?
Text Completion is just the objective function. It's not descriptive and says nothing about how the models complete text. Why people hang on this word, I'll never understand. When you wrote your comment, you were completing text.
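For reference, "the objective function" here is usually just next-token cross-entropy over the training corpus. One standard way to write it (notation mine, not from the thread):

```latex
% Next-token prediction objective: maximize the log-probability the model
% assigns to each training token given everything that precedes it.
\mathcal{L}(\theta) = -\sum_{t=1}^{T} \log p_\theta\left(x_t \mid x_{<t}\right)
```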
The problem you've just described is a problem with humans as well. LLMs are assuming all the time. Maybe you would like to call it another word, but it is happening.
This take really misses a key part of the implementation of these LLMs, and I've been struggling to put my finger on it.
In every LLM thread someone chimes in with “it’s just a statistical token predictor”.
I feel this misses the point and I think it dismisses attention heads and transformers, and that’s what sits weird with me every time I see this kind of take.
There _is_ an assumption being made within the model at runtime. Assumption, confusion, uncertainty - one camp might argue that none of these exist in the LLM.
But doesn’t the implementation constantly make assumptions? And what even IS your definition of “assumption” that’s not being met here?
Edit: I guess my point, overall, is: what’s even the purpose of making this distinction anymore? It derails the discussion in a way that’s not insightful or productive.
Coining "Jagged AGI" to work around the fact that nobody agrees on a definition for AGI is a clever piece of writing:
> In some tasks, AI is unreliable. In others, it is superhuman. You could, of course, say the same thing about calculators, but it is also clear that AI is different. It is already demonstrating general capabilities and performing a wide range of intellectual tasks, including those that it is not specifically trained on. Does that mean that o3 and Gemini 2.5 are AGI? Given the definitional problems, I really don’t know, but I do think they can be credibly seen as a form of “Jagged AGI” - superhuman in enough areas to result in real changes to how we work and live, but also unreliable enough that human expertise is often needed to figure out where AI works and where it doesn’t.
It seems to me the bar for "AGI" has been lowered to measuring what tasks it can do rather than the traits we normally associate with general intelligence. People want it to be here so bad they nerf the requirements...
Well I think the point being made is an instrumental one: it’s general enough to matter, so we should use the word “general” to communicate that to laypeople.
Re: "traits we associate with general intelligence", I think the exact issue is that there is no scientific (i.e. specific and consistent) list of such traits. This is why Turing wrote his famous 1950 paper and invoked the Imitation Game; not to detail how one could test for a computer that's really thinking (/truly general), but to show why that question isn't necessary in the first place.
I still disagree; being good at a number of tasks does not make it intellectual.
Certainly creativity is missing, it has no internal motivation, and it will answer the same simple question both right and wrong, depending on unknown factors. What if we reverse the framing from "it can do these tasks, therefore it must be..." to "it lacks these traits, therefore it is not yet..."?
While I do not disagree that LLMs have become advanced enough to power a bunch of automation, I do not agree they are intelligent or actually thinking.
I'm with Yann LeCun when he says that we won't reach AGI until we move beyond transformers.
I don't think that's a particularly honest line of thinking though. It preempts the obvious counterargument, but very weakly so. Calculators are different, but why? Can an ensemble of a calculator, a Prolog interpreter, Alexnet and Stockfish be considered "jagged superintelligence"? They are all clearly superhuman, and yet require human experience to be wielded effectively.
I'm guilty as charged of having looked at GPT 3.5 and having thought "it's meh", but more than anything this is showing that debating words rather than the underlying capabilities is an empty discussion.
>Calculators are different, but why? Can an ensemble of a calculator, a Prolog interpreter, Alexnet and Stockfish be considered "jagged superintelligence"?
Those are all different things with little to nothing to do with each other. It's like saying: what if I ensemble a snake and a cat? What does that even mean? GPT-N or whatever is a single model that can do many things, no ensembling required. That's the difference between it and a calculator or Stockfish.
>> It is already demonstrating general capabilities and performing a wide range of intellectual tasks, including those that it is not specifically trained on.
Huh? Isn't an LLM's capability fully constrained by the training data? Everything else is hallucinated.
You can argue that everything output by an LLM is hallucinated, since there's no difference under-the-hood between outputting useful information and outputting hallucinations.
The quality of the LLM then becomes how often it produces useful information. That score has gone up a lot in the past 18 months.
(Sometimes hallucinations are what you want: "Tell me a fun story about a dog learning calculus" is a valid prompt which mostly isn't meant to produce real facts about the world.)
The critical discovery was a way to crack the “Frame Problem”, which roughly comes down to colloquial notions of common sense or intuition. For the first time ever, we have models that know if you jump off a stool, you will (likely!) be standing on the ground afterwards.
In that sense, they absolutely know things that aren’t in their training data. You’re correct about factual knowledge, tho — that’s why they’re not trained to optimize it! A database(/pagerank?) solves that problem already.
I agree with Professor Mollick that the capabilities in specific task categories are becoming superhuman -- a precursor for AGI.
Until those capabilities are expanded for model self-improvement -- including being able to adapt its own infrastructure, code, storage, etc. -- then I think AGI/ASI are yet to be realized. My POV is SkyNet, Traveler's "The Director", Person of Interest's "The Machine" and "Samaritan." The ability to target a potentially inscrutable goal along with the self-agency to direct itself towards that is true "AGI" in my book. We have a lot of components that we can reason are necessary, but it is unclear to me that we get there in the next few months.
What’s clear is that AI is unreliable in general and must be tested on specific tasks. That might be human review of a single output or some kind of task-specific evaluation.
It’s bad luck for those of us who want to talk about how good or bad they are in general. Summary statistics aren’t going to tell us much more than a reasonable guess as to whether a new model is worth trying on a task we actually care about.
Right: we effectively all need our own evals for the tasks that matter to us... but writing those evals continues to be one of the least well documented areas of how to effectively use LLMs.
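For what it's worth, such an eval doesn't have to be fancy. A toy sketch of the idea follows; the prompts, pass/fail checks, and the `call_model` hook are all invented here for illustration, not anyone's actual harness:

```python
# Toy per-task eval: a handful of prompts you care about, a cheap check for each,
# and a score you can re-run whenever a new model ships.
from typing import Callable

CASES = [
    ("What weighs more, a pound of courage or a pound coin?",
     lambda out: "abstract" in out.lower() or "doesn't have" in out.lower()),
    ("Generate an SVG of a red circle.",
     lambda out: "<svg" in out and "circle" in out),
]

def run_evals(call_model: Callable[[str], str]) -> float:
    """call_model is whatever wraps your API or local model; returns pass rate."""
    passed = sum(1 for prompt, check in CASES if check(call_model(prompt)))
    return passed / len(CASES)

# Example: score = run_evals(my_model_fn); print(f"{score:.0%} of cases passed")
```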
There is a similar issue with image and video generation. Asking the AI to "Generate an image of a man holding a pencil with his left hand" or "Generate a clock showing the time 5 minutes past 6 o'clock" often fails, because so many images in the training set look alike (almost all clock images online show 10:10: https://generativeai.pub/in-the-ai-art-world-the-time-is-alm...).
I think there's a single definition of AGI that will stand until the singularity:
"An AGI is a human-created system that demonstrates iteratively improving its own conceptual design without further human assistance".
Note that a "conceptual design" here does not include tweaking weights within an already-externally-established formula.
My reasoning is thus:
1. A system that is only capable of acting with human assistance cannot have its own intelligence disentangled from the humans'
2. A system that is only intelligent enough to solve problems that somehow exclude problems with itself is not "generally" intelligent
3. A system that can only generate a single round of improvements to its own designs has not demonstrated improvements to those designs, as if iteration N+1 were truly superior to iteration N, it would be able to produce iteration N+2
4. A system that is not capable of changing its own design is incapable of iterative improvement, as there is a maximum efficacy within any single framework
5. A system that could improve itself in theory and fails to do so in practice has not demonstrated intelligence
It's pretty clear that no current-day system has hit this milestone; if some program had, there would no longer be a need for continued investment in algorithms design (or computer science, or most of humanity...).
A program that randomly mutates its own code could self-improve in theory but fails to do so in practice.
I don't think these goalposts have moved in the past or need to move in the future. This is what it takes to cause the singularity. The movement recently has been people trying to sell something less than this as an AGI.
It's not AGI because it still doesn't understand anything. It can only tell you things that can be found on the internet. These "jagged" results expose the truth that these models have near 0 intelligence.
It is not a simple matter of patching the rough edges. We are fundamentally not using an architecture that is capable of intelligence.
Personally the first time I tried deep research on a real topic it was disastrously incorrect on a key point.
There are some very strong and very unfounded assumptions in your comment. Is there anything more substantial there other than "that's what it feels like to me"?
I think determinism is an important element. You can ask the same LLM the same question repeatedly and get different answers - and not just different ways of stating the same answer, very different answers.
If you ask an intelligent being the same question they may occasionally change the precise words they use but their answer will be the same over and over.
That's because "intelligent beings" have memory. If you ask an LLM the same question within the same chat session you'll get a consistent answer about it.
I disagree. If you were to take a snapshot of someone's knowledge and memory such that you could restore to it over and over, that person would give the same answer to the question. The same is not true for an LLM.
Heck, I can't even get LLMs to be consistent about *their own capabilities*.
Bias disclaimer: I work at Google, but not on Gemini. If I ask Gemini to produce an SVG file, it will sometimes do so and sometimes say "sorry, I can't, I can only produce raster images". I cannot deterministically produce either behavior - it truly seems to vary randomly.
A human will give different answers to the same question, so I’m not sure why it’s fair to set a higher bar for an LLM. Or rather, I’m not sure how you would design this test in a way where humans would pass and the best LLM would fail.
That's not an inherent property of the system. You can choose the most likely token at every step (top-k = 1, i.e. greedy decoding) and it will be deterministic (at least in theory; on some hardware setups it might be trickier).
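A quick sketch of that point, again using GPT-2 via transformers purely as a stand-in for "an LLM" (assumed setup, not anyone's production config): with sampling disabled the same prompt gives the same output on every run, while temperature sampling generally does not.

```python
# Greedy decoding (do_sample=False) is repeatable; temperature sampling is not.
from transformers import AutoModelForCausalLM, AutoTokenizer

tok = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")
ids = tok("What weighs more, a pound of courage or a pound coin?", return_tensors="pt").input_ids

greedy = [tok.decode(model.generate(ids, max_new_tokens=20, do_sample=False)[0]) for _ in range(3)]
sampled = [tok.decode(model.generate(ids, max_new_tokens=20, do_sample=True, temperature=1.0)[0]) for _ in range(3)]

print(len(set(greedy)) == 1)   # True: identical output on every run
print(len(set(sampled)) == 1)  # usually False: outputs differ run to run
```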
> ...I think we can all at least agree that they do something that at least resembles "reasoning" and "assumptions" from our human point of view.
Yes, and it can express its assumptions in text.
Ask it to make some assumptions, like about a stack for a programming task, and it will.
Whether or not the mechanism behind it feels like real thinking to you, it can definitely do this.
Also, LLMs absolutely 'plan' and 'aim for something' in the process of completing text.
https://www.anthropic.com/research/tracing-thoughts-language...
> ...I do think they can be credibly seen as a form of "Jagged AGI" - superhuman in enough areas to result in real changes to how we work and live, but also unreliable enough that human expertise is often needed to figure out where AI works and where it doesn't.
Certainly jagged does not imply general
> What weighs more; a pound of courage or a pound coin?
> They weigh the same—each is a “pound.” A pound of courage and a pound coin both come out to one pound in weight!
https://chatgpt.com/share/68053ad0-aae4-800d-95ff-5b5b0a9a38...
I didn't know what AGI is, but I know this isn't it.
:::thinking
This question is playing with words and units.
A "pound of courage" refers to a metaphorical measurement of the abstract concept of courage, using the imperial weight unit "pound" (which equals 453.59 grams).
A "pound coin" refers to the physical currency of the United Kingdom, which is called a "pound" but is a unit of currency, not weight. A physical pound coin weighs about 8-9 grams (specifically, the current 12-sided pound coin weighs 8.75 grams).
So comparing them:
- A pound of courage would weigh 453.59 grams (if courage had physical weight, which it doesn't since it's an abstract concept)
- A pound coin weighs approximately 8.75 grams
The trick in this question is that it's comparing:
1. A specific weight of an abstract concept (pound of courage)
2. A physical coin whose name includes the word "pound" but refers to currency, not weight
Since the question asks what weighs more, and weight is a physical property, I should point out that courage is abstract and doesn't have physical weight, while a pound coin is a physical object with a measurable weight.
:::
# Comparing "A Pound of Courage" vs "A Pound Coin"
This is a play on words with different meanings of "pound":
- A *pound of courage* refers to a metaphorical pound (weight unit) of an abstract concept
- A *pound coin* is a physical British currency coin (which weighs about 8.75 grams)
If we're talking about physical weight:
- A pound of courage doesn't have physical weight since courage is abstract
- A pound coin has a definite weight of approximately 8.75 grams
If we interpreted "pound of courage" as literally one pound (453.59 grams) of something, it would weigh significantly more than the coin.
This question is clever because it compares a metaphorical measurement with a physical object whose name includes the word "pound" but in a different context.
> A young boy who has been in a car accident is rushed to the emergency room. Upon seeing him, the surgeon says, "I can operate on this boy!" How is this possible?
But it didn't!
(o4-mini high thought for 52 seconds and even cheated and looked up the answer on Hacker News: https://chatgpt.com/share/68053c9a-51c0-8006-a7fc-75edb734c2...)
> We are fundamentally not using an architecture that is capable of intelligence.
What does that even mean? Do you actually have any particular numeric test of intelligence that's somehow better than all the others?