CMV: LLMs such as ChatGPT and Claude are genuinely intelligent in different-but-comparable ways to humans and other intelligent creatures.

Early note: for simplicity I'll often just refer to ChatGPT in this post, as it's the best-known LLM, but most of what I'm saying applies to all LLMs, such as Claude, Gemini, etc...

Very often on websites such as Reddit, when discussing tools like ChatGPT or Claude, you'll see many people chime in with comments like "they're not really intelligent at all, they're just predicting the next token and outputting it; they don't have any capacity to think or reason".

While it's certainly true on a technical level that "they're just predicting the next token and outputting it", I believe this assessment oversimplifies the actual workings of these models, and it doesn't take proper account of how the human brain works or of the similarities between the two.

The first topic is sentience. There's no arguing one simple point: ChatGPT is not sentient. It has no consciousness, and it cannot consciously "think" in the way that humans can. Many people use this as an instant red line to decide "it's not really intelligent" - but I believe this is wrong. Sentience shouldn't be considered a prerequisite for intelligence. Intelligence is generally defined as the ability to acquire, retain and use knowledge, and ChatGPT is very adept at doing this: it acquires knowledge from its training data and is able to apply that knowledge in ways that have real utility. If we observed an animal doing this then we'd undoubtedly conclude that it belongs to an intelligent species, yet people refuse to acknowledge that LLMs are intelligent solely because they aren't sentient, and I don't believe that's correct. I'm not suggesting that LLMs possess general intelligence in the way that humans do, but rather that they exhibit specific forms of intelligence that merit recognition. Cognitive scientists often distinguish between different types of intelligence, and LLMs clearly demonstrate proficiency in some of these domains, particularly linguistic intelligence.

The next topic is "*how* does it acquire and apply knowledge?". The simplest answer is that it performs highly complex pattern recognition on the data that's fed into it, learning how humans make use of knowledge, and then makes statistical predictions based on those patterns, which are then output in some way. You know what else does this? *Humans.* From the moment we're born (probably in the womb too) our brain is constantly, subconsciously picking up information from sensory input (what we see, hear, smell, etc...) and learning optimal ways to behave based on pattern recognition within that data. Every thought, feeling, and action that we experience arises from constant subconscious processes happening within our brains. There is substantial evidence that our subconscious minds make decisions before we're even consciously aware of them, and that our conscious thoughts are simply rationalisations and justifications for those decisions. In this sense, how is human reasoning much different to the way that ChatGPT reasons? To be clear, I'm not saying that the *mechanism* by which ChatGPT reasons and by which humans reason is the same, but there are abstract similarities between the way ChatGPT decides its next token to output and the way the human brain decides its next thought, action, etc... If anybody is interested in this particular topic then I'd suggest reading about predictive coding or the Bayesian brain hypothesis - real neuroscientific theories which propose that the human brain and nervous system are extremely complex 'prediction machines' (much like ChatGPT).
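To make the "predicting the next token" idea concrete, here's a deliberately tiny toy sketch of my own (nothing like a real transformer, which uses neural networks trained on huge corpora): it just counts which word tends to follow which in a small corpus and "predicts" the most frequent continuation. The point is only to show the abstract loop - observe patterns, then predict what comes next:

```python
from collections import Counter, defaultdict

# Toy corpus; a real LLM learns from trillions of tokens, not a dozen words.
corpus = "the cat sat on the mat the cat ate the fish".split()

# Count how often each word follows each other word.
follows = defaultdict(Counter)
for current, nxt in zip(corpus, corpus[1:]):
    follows[current][nxt] += 1

def predict_next(word):
    """Return the word most often seen after `word`, or None if unseen."""
    counts = follows.get(word)
    return counts.most_common(1)[0][0] if counts else None

print(predict_next("the"))  # "cat" - it follows "the" most often here
```

Obviously this captures none of the depth of a real model, but even this crude version "acquires" knowledge from data and "applies" it to make a prediction - which is the abstract similarity I'm pointing at.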

There are certain specific domains of intelligence in which ChatGPT inarguably outperforms humans: it can acquire new knowledge much faster than humans, it can retain a much greater breadth of knowledge, and it can compile and apply that knowledge far more quickly. On the other hand, there are plenty of domains in which ChatGPT inarguably doesn't outperform humans - it's not good at finding *new* patterns, it has no capacity for self-determination, and it has no true agency. But why do we limit our idea of intelligence to a human model of intelligence? Why can't we accept that ChatGPT possesses a different model of intelligence to humans but is intelligent nonetheless?

To summarise my main points:

- I don't believe sentience is a prerequisite for intelligence.

- Labelling LLMs as 'statistical models that just output tokens' is oversimplifying a complex topic, especially given that the human brain works in similar ways.

- The idea of 'intelligence' shouldn't be limited to a model of human intelligence but should be considered in other, more nuanced ways.

I think there are many other points and topics that could be explored in a discussion like this, and it's probably fair to say that I've oversimplified several things myself for the sake of a reasonably concise post (the Bayesian brain hypothesis in particular is much deeper and more complex than the analogy I've made here), but I think this is it for now.

Change my view please.