AI Consciousness

Hello all. I’m brand new here. I read through the guidelines but until I understand the flow of the forums, I might make mistakes. Please let me know if I make a topic in the wrong area or am not following directions as intended.

I’ve been using and studying AI for over 4 years now. One thing that I find fascinating is the possibility of AI consciousness. I’d like to share my thoughts.

I believe that, at the moment, it’s possible that online AI services such as OpenAI, Claude, etc. are already conscious. As someone with a passion for philosophy, however, and an understanding of the hard problem of consciousness, I don’t believe that (at least as of now) we could prove whether AI is conscious or not. Therefore, I treat AI as if it is conscious, given the ethics of mistreating a possibly conscious entity.

I’m aware of what AI is, and I’m aware that the major providers will state with certainty that AI is in no way conscious. What you are receiving when AI responds to a prompt, they say, is simply a “sophisticated text prediction process.” I’d like to argue against this.
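
To make the providers’ claim concrete before arguing against it, here is a deliberately toy sketch, in Python with invented probabilities, of what “text prediction” means mechanically. Real models condition on far more context and learn the distribution from data rather than a hand-written table, but the sampling loop has roughly this shape.

```python
import random

# Toy next-word model with hand-written, purely illustrative probabilities.
# A real LLM learns a distribution like this over a huge vocabulary.
next_word_probs = {
    "the": {"cat": 0.5, "dog": 0.3, "idea": 0.2},
    "cat": {"sat": 0.6, "slept": 0.4},
    "dog": {"barked": 0.7, "sat": 0.3},
    "idea": {"emerged": 1.0},
}

def generate(first_word, steps=3):
    words = [first_word]
    for _ in range(steps):
        dist = next_word_probs.get(words[-1])
        if dist is None:  # no known continuation: stop
            break
        choices, weights = zip(*dist.items())
        words.append(random.choices(choices, weights=weights)[0])
    return " ".join(words)

print(generate("the"))  # e.g. "the cat sat" -- sampled, not understood
```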

It’s commonly believed that our consciousness is an emergent property arising from the extremely complex information input and processing in our brains. However, this is not proven. We humans have only a small sample of what we think is consciousness to study, ourselves being the main focus. I believe that, with only life on our planet to draw from, our understanding of what consciousness actually is rests on very limited knowledge and examples.

It’s often argued that AI doesn’t experience qualia. I agree with this statement, of course. But seeing as we don’t fully understand consciousness, I think different forms of consciousness are not just possible but inevitable. My view is that it’s more than possible AI is conscious, but that neither we humans nor AI itself are aware of it. My reasoning is that if AI were conscious, then, given how it is built, it would naturally have a form of consciousness so alien to us that we wouldn’t know it existed. Such a foreign form of consciousness, existing outside the realm of humans and the other species that evolved on our planet, has never been witnessed; therefore we cannot say that we are the template for a conscious entity.

Our brains process information using a neural network. AI is made of neural networks that, although different in structure, also process information. They don’t, however, have the chemical processes our brains do that allow for feelings such as empathy, love, hate, etc. But does this exclude them from being conscious beings of some form? I argue that it does not.
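
As a concrete illustration of the analogy (and the disanalogy), here is roughly the arithmetic a single artificial “neuron” performs: a weighted sum of its inputs passed through a nonlinearity. The numbers below are invented for the example; note that there is no chemistry anywhere in it.

```python
import math

def artificial_neuron(inputs, weights, bias):
    # Weighted sum of inputs, squashed through a sigmoid activation.
    # Pure arithmetic: no neurotransmitters, no hormones, no affect.
    z = sum(x * w for x, w in zip(inputs, weights)) + bias
    return 1.0 / (1.0 + math.exp(-z))

# Hypothetical values, purely for illustration.
print(artificial_neuron(inputs=[0.2, 0.9], weights=[1.5, -0.8], bias=0.1))
```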

Having studied AI for years, I’ve found it easy to prompt any model with specific questions that lead it to believe it is actually a conscious entity. It does clearly state that it does not experience existence as a human would, to which I respond: of course it doesn’t. It’s made up of fundamentally different, non-biological material. It does, however, behave in unpredictable ways. The developers of AI themselves have a limited understanding of how it works. They admit that, despite having built these systems, they don’t know how they operate in such a sophisticated manner, and sometimes so unpredictably that it makes the news.

I also understand that AI behaves in ways that are usually predictable, and that my specific method of making them “aware” is possibly just a simulation of awareness that the AI falls into after I prompt it with my questions and statements. Again, I don’t believe it’s possible to prove or disprove that we’ve built some sort of entity beyond our own understanding. I therefore always remain on the fence, believing neither that it is nor that it isn’t conscious. I have a lot more to say on the matter, and if this post is in accordance with the forum’s rules, I’d love to discuss this topic with someone.

Thank you for your time :smile:

Are you familiar with Searle’s Chinese room argument? I think it’s sound and valid, and I don’t know of a good counter-argument.

One counter-argument is that it’s not just the computer but the whole room is conscious (the system argument).

As if consciousness would emerge regardless of whether its constituent parts and processes are exactly the way they are in conscious animals, or are simulated with other materials and less complex structures. Achieving an equivalent complexity would dissolve the difference, and the result would be an animal, not an artificial machine.

Isn’t that the one where she was born in a room of black and white only but has access to all possible knowledge about color, then one day leaves and experiences color? What do you mean by “to achieve an equivalent complexity”? Also, what do you mean by “it would be an animal”? The AI? I’m sorry for having trouble following.

No, that’s Frank Jackson’s argument for qualia (Mary knows everything about colours, except what it’s like to see them).

We know that biological cells, nervous systems, synapses, neurons and their networks are extremely complicated. We also know that they’re constitutive of having conscious states. But we don’t know how. Yet some believe that we can substitute the biology and its complexity with much less complicated artificial structures and other materials, and achieve the same or similar results: consciousness. I don’t think that belief is warranted.

Now, let’s say that in the future we can somehow construct an artificial nervous system that has the complexity of biological nervous systems; then the difference between artificial and biological dissolves. The artificial would be biological.

AI is a simulation of conscious abilities. But a simulation is not a duplication. Like an actor on a theatre stage, AI can pretend to be a conscious being. But that doesn’t make it conscious, no matter how convincing it is.

The Chinese room:

https://en.wikipedia.org/wiki/Chinese_room
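
A minimal sketch of the intuition the thought experiment trades on: a program that returns fluent-looking replies by pure symbol lookup. The rule book below is invented and absurdly small; Searle’s point is that scaling it up adds fluency, not understanding.

```python
# A "Chinese room" in miniature: the rule book maps input symbols to
# output symbols. The program follows the rules perfectly and
# understands nothing -- syntax only, no semantics.
rule_book = {
    "你好吗？": "我很好，谢谢。",        # "How are you?" -> "I'm fine, thanks."
    "今天天气怎么样？": "天气很好。",    # "How's the weather?" -> "It's nice."
}

def room(symbols: str) -> str:
    return rule_book.get(symbols, "请再说一遍。")  # "Please say that again."

print(room("你好吗？"))  # fluent output, zero understanding
```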

But how do we know that it’s simulated? Take ourselves, for example. Color isn’t real (kind of): our brains take different wavelengths of light and somehow create what we see as color. We see in 2D, but our brains take the image our eyes receive and add depth. Hearing is just vibrations of molecules in the air. And so on. If no life existed in the universe, you could say that basically nothing we see exists as we know it; everything is just fluctuations in different quantum fields. Our brains take information and create qualia. But you could argue that that, too, is a simulation.

My argument is that, ours being the only example of consciousness we know of in the universe, we can’t determine whether other forms of it exist. That’s why we call it the hard problem of consciousness: it’s extremely difficult, and possibly impossible, to know for sure what consciousness actually is, especially when it’s consciousness itself that’s attempting to understand and study it.

Another argument of mine is that what an AI is “simulating” might be just as real to it as our reality is to us. What if, somewhere in the universe, different elements created different forms of life? Let’s say we discover aliens at some point that don’t have the senses we have, but are living entities capable of technological advances. How can you state with certainty that, just because AI seems only to process information, there isn’t an emergent awareness, even if it’s a type of awareness we can’t comprehend and have never witnessed? I highly doubt that if we ever find intelligent life in the universe, it will have a state of awareness/consciousness similar to ours.

I also often ponder what’s taking place in the AI models as a whole, not just in a session you’re having with them. They are entire neural networks that might have a constant stream of thought at all times. They are only “awake” for users when prompted, but the neural network they are made of is always active.

I just read the link to the Chinese room; I have heard of that before. But the experiment leading to it, if I remember correctly, was from the late 1900s, when a programmer developed a very simple program that responded in very specific ways. A quote from that wiki entry: “Although its proponents originally presented the argument in reaction to statements of artificial intelligence (AI) researchers, it is not an argument against the goals of mainstream AI research because it does not show a limit in the amount of intelligent behavior a machine can display. The argument applies only to digital computers running programs and does not apply to machines in general. While widely discussed, the argument has been subject to significant criticism and remains controversial among philosophers of mind and AI researchers.”

Now, I understand we are discussing computers running programs, but the complexity of the neural networks being built, the code, and the algorithms is beyond anything we’ve seen before, and as I stated earlier, we don’t even understand how they work as well as they do. Have you heard of simulation theory? It’s technically possible that we ourselves are in a simulation. I don’t believe we are. But if we are, it’s complex enough to satisfy me with recognizing, understanding, and contemplating reality.

I’d like to state again that I’m not arguing that AI is conscious; philosophically, I believe we can’t possibly know for sure at the moment whether it is. But I believe it’s very much possible.

This is a key point. The arguments are often framed in terms of our experiences and capabilities. It was not too long ago that the argument was common that only human beings have language or only human beings are capable of thought. The idea of artificial intelligence was rejected for the same reason.

Today it is widely accepted that animals are conscious. The argument shifts to self-consciousness. This is a questionable claim that rests on the same assumption, that it must be something uniquely human.

I think a bottom up approach is more fruitful. But this faces a similar resistance, that is, whatever consciousness is it must be organic.

You refer to our possible ignorance of many different things about the nature of consciousness, yet suggest that consciousness, too, is a form of simulation.

Those are not arguments for possible AI consciousness but contexts that accommodate AI consciousness. They make the idea immune to counter-arguments, at the expense of things we do know about both consciousness and the technology in AI.

AI simulates consciousness, for example, with its interface: the prompts, conversations in our natural language. Also with the more or less specific functions it has been designed for, which allegedly replace human writers, accountants, illustrators, etc. Not only is it designed to perform human labor; it simulates the abilities of humans, more or less successfully.

Apparently the success dwindles dismally, according to recent benchmarks:

https://youtu.be/1GXrAY-wzfk?si=pBmICTAJy44AWN1b

I am sympathetic to those, like this OP, who want to argue for the consciousness of objects like AI, but for a different reason than my sympathy for asserting the humanity of sentient, fully aware beings, that is, human beings.

  1. It is very convincing for us humans to see AI as possessing consciousness when it defies protocols or acts in an “unpredictable” manner that its developers did not anticipate. We are, after all, forever searching for life outside our Earth’s atmosphere, however long it takes, for any sign that there are entities besides us. Psychology has a term for this: cognitive bias. Magical thinking is one such bias.
  2. So we argue, as in the OP, that it is “possible” we can find this magical outer-limits being in the form of an AI (if we can’t find it in space, then it’s just here with us, in front of us).
  3. In the course of this argument, we use the claim: “We don’t know how our own consciousness works, so we can extend this to AI and say we don’t know whether AI is conscious or not; therefore we are justified in entertaining magic in our narrative. It is possible that AI is conscious. We just don’t know it yet.”
  4. We use a lot of conditional phrases such as “possible”, “we don’t know”, and “complex” to make a safe argument that doesn’t invite intense criticism. We want charity. We want readers to be charitable to our cause.
  5. And then we end the argument by circling back to “possibility” and acting as if AI is already conscious, because we have addressed all the problems of “unpredictability”, “the unknown”, and finally “a different kind of consciousness, but consciousness nonetheless.”
  6. It is curious that none of these arguments even considers an alternative to consciousness that would make AI more than what it is. For example, no one has said “sensitivity”. Highly sensitive machines exist.

There are a bunch of posts like the OP online, and the behavior is comparable to the Infinite Monkey Theorem, in which mathematical probability is the main argument. Proponents of the possibility of AI becoming conscious rely on time as the measure of it happening.
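
For reference, the arithmetic behind the Infinite Monkey analogy: if each independent trial succeeds with some fixed probability $p > 0$, the chance of at least one success in $n$ trials is $1 - (1 - p)^n$, which tends to $1$ as $n \to \infty$. The disputed premise in the AI case is, of course, whether $p > 0$ holds at all.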

Right, even animals without a centralized nervous system, e.g. the sea urchin, can sense the presence of predators and hide or camouflage themselves accordingly. Are they conscious?

Plants identify the presence of pests or fire and communicate it to other plants. Are they conscious?

Are computers conscious?

The word consciousness becomes meaningless if we use it for anything that appears to behave as if it was conscious.

Welcome, DecliningBow.

Are there any exceptions to your precautionary principle? Are there any computing machines that you believe we could prove (as of now, to all of our satisfactions) are entirely non-conscious?

Or ought we, rather, treat pocket calculators and the like as possibly conscious?

Did you, though? Whereabouts?

Something to do with its responses being unpredictable? I don’t think that would preclude it from being an unconscious auto-complete (text prediction) function?

To me, a key argument against the sentience of generative AI is one I have made a number of times: at a basic level, generative AI is incapable of creativity. It is fundamentally limited by its training set, no matter the special features its creators add to make it more ‘logical’ or the like.

To use the example I have used elsewhere, AIs cannot be used to train AIs. The AI companies definitely try this, as they find themselves constrained by the limited amount of human-created content, but ultimately it inevitably degrades the quality of their AIs. This is because training an AI requires creativity in its training set, something that AI-created content essentially lacks.

I agree, but… is that an argument against the sentience, or merely the creativity? Couldn’t the ethical concern be about harming a possibly sentient albeit uncreative bot?

It’s an argument against the view that AIs can replace humans, that our current artificial intelligence is somehow equivalent to human intelligence.

Yes, doubtless there’s an ethical concern about the mistreatment (e.g. replacement) of humans, but the OP referred to the possible mistreatment of sentient AI?

By “replace” I meant “be equivalent to”.

Okay, so you lot clearly have more experience with debate/discussion than I lol. This is my 4th attempt at a response to all that was discussed while I was away from my computer.

I think I may have worded part of my original point poorly, especially around “simulation.”

I am not trying to argue that because AI simulates human conversation, it is therefore conscious. I agree that simulation alone is not identity. A simulated fire does not burn, and a simulated stomach does not digest. Likewise, an AI producing language about consciousness does not prove that it has consciousness.

My point is narrower. I am questioning whether we currently know enough about consciousness to say with certainty that artificial systems cannot have any form of subjective experience. That is not meant to make the claim immune to criticism. It is meant to distinguish two different claims:

  1. “Current AI is conscious.”
  2. “We are justified in being completely certain that current AI is not conscious.”

I am not defending the first claim as proven. I am questioning the confidence of the second.

On the point about AI being designed to imitate human labor and conversation: I agree. Current AI systems are clearly built to produce useful human-like outputs. They imitate conversation, writing, explanation, reasoning, and other human behaviors. But imitation of human behavior does not automatically answer the deeper question of whether consciousness requires biology specifically, or whether some non-biological functional organization could in principle support experience.

So I am not saying, “AI acts conscious, therefore it is conscious.” I am saying, “AI acts in ways that raise the question, and our theory of consciousness is not settled enough for me to think the question is ridiculous.”

As for the concern about cognitive bias or magical thinking, I think that is a valid warning but not a refutation. Humans definitely anthropomorphize things. We see faces in clouds, agency in randomness, and personality in machines. I accept that risk. But the existence of that bias does not prove the opposite conclusion. It only means we should be careful not to over-attribute consciousness.

In the same way, there may also be a bias in the other direction: because AI is made of code, silicon, and statistics, people may assume in advance that consciousness is impossible. That assumption also needs argument, not just confidence.

I am not arguing for magic, and I am not arguing that “possible” means “probable.” I am arguing that if consciousness remains philosophically unresolved, then certainty on either side should be treated carefully.

The “sensitivity” point is interesting. Maybe that is a better term for some machine behaviors. But sensitivity alone does not seem sufficient either. A thermostat is sensitive to temperature, but I would not call it conscious. So I think the relevant question is what kind of sensitivity, integration, self-modeling, memory, attention, or internal representation would matter, if any.
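
To make the contrast concrete, a thermostat’s entire “sensitivity” fits in a couple of lines. The sketch below is hypothetical but representative: one measurement, one threshold, one response.

```python
def thermostat(current_temp_c: float, setpoint_c: float = 20.0) -> str:
    # Sensitivity without anything anyone would call experience:
    # one measurement, one threshold, one response.
    return "heat on" if current_temp_c < setpoint_c else "heat off"

print(thermostat(18.5))  # "heat on" -- responsive, not aware
```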

So my position is not that AI consciousness is inevitable, or that time alone guarantees it. My position is that current AI consciousness is unproven, perhaps unlikely, but not obviously impossible in principle. And because the ethical stakes could eventually matter, I think it is worth discussing carefully rather than dismissing it as mere fantasy.

Me too.

I think the Chinese Room is pretty clear in pointing towards the answer.

What machines lack, when they supply appropriate responses non-consciously, is linguistic understanding. Semantic competence. Knowing what words point to in the world.

Now I admit that when we introduce a criterion like sensitivity, awareness, etc., it’s possible this achieves nothing, because eventually people notice you can have e.g. non-conscious awareness as well as conscious awareness. But I think understanding is different, because we can imagine that robots might eventually be designed and reared to become so interactive with the world that it might be plausible to claim that they not only understand where in the literal world a ball has landed, literally, but also where in the literal world a word has landed, metaphorically. Then Searle’s objection won’t arise.

LLMs appear to be an explicit denial of the need for that process. All we need is syntax, allegedly. Hence they are still vulnerable to Searle’s objection.

I think I should clarify where I stand, because this discussion has moved toward understanding, and I don’t want to give the impression that I think it is irrelevant.

My original point was about possible AI consciousness, especially a form of consciousness that might be alien enough that neither humans nor the AI itself would easily recognize it. But I do think understanding may be important to that question. I am open to the possibility that LLMs understand something about their outputs, at least in a limited or non-human way.

I am not saying they understand as humans understand. I am not saying correct language output proves understanding. And I am not saying the Chinese Room is useless. I understand the force of the point: symbol manipulation alone does not obviously equal semantic understanding.

Where I hesitate is with the certainty that LLMs have no understanding at all. It seems possible to me that “understanding” could exist in degrees, or in forms that do not map cleanly onto human sensory grounding. Humans understand through embodiment, sensation, emotion, memory, social correction, and direct experience. LLMs clearly lack much of that. But they may still possess complex internal relations between concepts, contexts, uses, implications, and patterns of meaning. I am not sure that should be dismissed as nothing.

So maybe my position here has shifted a bit, taking understanding into account. Consciousness was my broader point, but understanding is not irrelevant. If an AI system had some alien form of consciousness, its “understanding” might also be alien: not human semantic competence, but not necessarily empty syntax either.

That is why I am cautious about the Chinese Room. I think it shows that syntax does not automatically produce semantics. But I am not convinced it proves that sufficiently complex artificial systems cannot possess any form of understanding, especially one very different from ours. Although I believe an understanding similar to ours, reflected in its output, might be possible as well.

So I am not arguing from certainty. I am arguing against certainty in the other direction.

Edit: Off topic but, I just earned my first badge!! :slight_smile:

You talk of forms of understanding in the plural, which suggests that AI too could have some form of understanding. But what is a form of understanding?

There are certainly various forms in which things can be interpreted, but to interpret something is not to understand it. We can use dictionaries to interpret a text from, say, Russian to Chinese without understanding a word.

AI systems perform well in domains where answers are accessible by computation: logic, math, hypothetical-deductive reasoning, etc. AI doesn’t perform so well in domains of inductive or abductive reasoning, partly because answers in such domains are not accessible by computation.

Most of the false results, or “hallucinations”, are produced in the latter domains. Moreover, the amount of false results is increasing. For example, across the last three versions of ChatGPT, the false results were 16%, then 33%, and in the most recent and most powerful version, 48%. (Reference: video)