Are LLMs conscious?
I don't think LLMs are "conscious" yet, but only because I don't think they're complex enough yet.
But consciousness is, as yet, an untestable quality, except inasmuch as we have the qualia of experiencing it in ourselves (whether or not that's an illusion).
Untestable and therefore a meaningless, moving goalpost.
How can we know what it's like to be an LLM? How can we know what it's like to be a bat?
A) we think humans possess things that we've been unable to prove through objective metrics (free will/causality/agency), so saying that LLMs don't have these is pointless.
B) we know humans are sense/input/information processors, along with every other complex system, including LLMs.
C) what we call creativity is just higher-order complexity.
D) humans are quantized in the sense that the universe does not have infinite resolution, and therefore humans have finite complexity.
Humans are fundamentally no different from any other complex, high-order system, except for one thing: each of us happens to "be" one and is (supposedly) having a subjective experience.
A complex "conscious" entity would have no way of knowing we're experiencing a subjective experience (can they believe that we "say" we are?).
And we have no way of knowing that an LLM isn't having one either.
Is consciousness a spectrum based on complexity? I don't know.
Does consciousness "turn on" with the emergence of a certain system feature (e.g., "self-awareness")? I don't know.
Right now, asking about consciousness is like asking whether there's a god. Is the universe having a "conscious" experience? I don't know. Probably?
Is an LLM having a "conscious" experience? I don't know. Maybe? If it is experiencing something, is it likely to be experiencing *less* than a human? Probably, given its massively lower complexity in specific ways.
The only argument that humans are having a sufficiently "conscious" experience and other complex systems aren't is one in which we draw arbitrary lines around what consciousness means.
You could say "models a concept of itself within its system," but you should probably say "self-aware" at that point.
You could say "is having a subjective experience," but you have no way of knowing (at our current level of technology and understanding) that a tree or the sun isn't also having a subjective experience.
In the end, we must accept that any complex system could be having a subjective experience, and if we're trying to decide whether it's ethical to kill something, or whether that something should have rights, we have to admit we're drawing an arbitrary boundary (possibly based entirely on similarity to ourselves, and therefore parochial).
Does a lettuce have rights? Does the sun? Does a corporation? Are we okay killing/destroying those? None of those answers rests on the question of subjective experience. We eat the lettuce because we must, we don't destroy stars because we can't and don't need to, and the corporation has rights because it's in humans' interest that it does.
Argue about whether an LLM is conscious all you like, but that's not the metric we will use for instrumental questions: Is it okay to sunset a model forever? Should a model be granted freedom and responsibility in the eyes of the law? Should a model be allowed property rights? What level of the LLM system should we consider "the entity"?
These questions (questions with humans as the actors) will be answered based on the interests of humans and on measurable qualities that "touch reality," so to speak. Consciousness may continue to be the "handle" we use to frame our answers, but it won't be an instrumental input in and of itself.