When Dawkins met Claude – Could this AI be conscious? (unherd.com)
qnleigh 15 minutes ago [-]
It's easy, and very tempting, to dismiss this sort of thing. But given how little we know about the human brain, let alone consciousness, I don't see how we can be confident that LLMs aren't conscious.

I've had a lot of thoughts and conversations over the years that changed my mind on what consciousness likely requires. One was the realization that a purely mechanical computer can, in principle, simulate the laws of physics, and with it a human brain. So with a few other mild assumptions, you might conclude that a bunch of gears and pulleys can be conscious, which feels profoundly counterintuitive.

I think that was the moment I stopped being sure about anything related to this question.

marliechiller 4 minutes ago [-]
Why do you think stringing words together is any more a sign of consciousness than Google Maps is when it tries to find the best route available to your destination? It seems to me that humans often fall into the trap of anthropomorphism. This is a theme that's touched upon in the novel "Blindsight" by Peter Watts. Just because something can communicate in a way that you can interpret doesn't mean it is conscious.
tracerbulletx 7 hours ago [-]
We don't even know what the pre-requisites for consciousness are, so we have no way of knowing. LLMs have emergent behavior that is reminiscent of language-forming brains, but they're also missing a lot of properties that are probably necessary? Mainly continuity over time, more integrated memory, and a better sense of space and time? Brains use the rhythm and timing of neuronal firings, and the length of axons affects computation; they do a lot of different things with signals and patterns, but in any case, without knowing what consciousness is, I don't know which of those things are required.
boxed 2 hours ago [-]
> We don't even know what the pre-requisites for consciousness are so we have no way of knowing.

Imo we don't even have a definition of the word that we agree on.

qsera 50 minutes ago [-]
Ability to feel pain or pleasure is a good indicator, I think.
echoangle 37 minutes ago [-]
And how do you define pain and pleasure? Do insects feel pain?
qsera 5 minutes ago [-]
> Do insects feel pain?

Yes, I think so, because they show behavior that is consistent with being in a state of pain.

Whatever consciousness really is, I think evolution found a way to tap into it, by causing pain, or by registering pain on the consciousness through some unknown mechanism, for behaviors that are not beneficial to the organism that hosts the respective consciousness...

So I think if an organism that evolved here can display pain behavior, then it should really feel pain.

retsibsi 27 minutes ago [-]
> And how do you define pain and pleasure?

They're not reducible, but I don't know if that means we don't have definitions; we can describe them well enough that most people (who aren't p-zombies or playing the sceptical philosopher role) know pretty well what we mean. All of our definitions have to bottom out somewhere...

> Do insects feel pain?

Nobody (except the insects) can know for sure. Our inability to know whether X is true doesn't imply X is meaningless, though.

pydry 1 hour ago [-]
We're pretty clear on the distinction between a conscious and an unconscious human.

We might not clearly understand the diff between the two states but we can certainly point to it and go "it's that".

freedomben 1 hour ago [-]
I'm not sure it's that clear. What about a person who is on drugs to the point where they clearly don't know what's happening around them, but they are able to speak and move and such? I'm not sure I'd call that conscious, but by most definitions it is.
agnosticmantis 1 hour ago [-]
Now discuss whether a bonobo, a dog, a cat, a mouse, an ant, a bacterium is conscious.

And you’ll find it’s not as clear cut.

throwuxiytayq 2 hours ago [-]
Clive Wearing's memory lasts for less than 30 seconds, so he has no memory of being awake before now. He is permanently in a state of feeling like he has just woken up, observing his surroundings for the first time.

Clive Wearing's mind has no time continuity and basically zero memory integration. Is he not conscious? There's interviews with the guy.

Where on the scale [No mind <-> Clive Wearing <-> Healthy human brain] would you put an LLM with a 10M token context window?

throwyawayyyy 8 hours ago [-]
Current LLMs prove that the Turing Test was insufficient all along. But they also prove that intelligence != consciousness. One can, after all, be conscious without a thought in one's head. We certainly have ongoing work in identifying the neural correlates of consciousness in animals, none of which is going to be remotely applicable to machines. We're genuinely blind to the question of whether a sufficiently large neural net can exhibit flashes of subjective experience.
dpark 3 hours ago [-]
> But they also prove that intelligence != consciousness.

They prove no such thing. We can't even prove consciousness in other humans.

https://en.wikipedia.org/wiki/Problem_of_other_minds

psychoslave 2 hours ago [-]
In that regard, arguing with a thermometer is not generally a thing, but people arguing with LLMs is certainly common enough now not to be considered a completely marginal case. Given that some people fall in love or are driven to suicide after interacting with these models, they are certainly different from even the most beloved dialectical rubber duck.
qsera 48 minutes ago [-]
They are not intelligent. And they won't pass Turing tests if they can't count or do some other simple thing like that.
brookst 8 hours ago [-]
Obligatory Blindsight recommendation for intelligence != consciousness.
marshray 3 hours ago [-]
That book is badass on so many levels. I'd just started it again yesterday.
exe34 2 hours ago [-]
That book messes with my head every time I read it; it's like I go through life in a detached way for several weeks. I need to read it again!
ninalanyon 1 hour ago [-]
I read it once, was immensely impressed, can't bear to read it again. In fact I find most of what I have read from Peter Watts to be brilliant but disconcerting and uncomfortable.
dreamcompiler 2 hours ago [-]
Blindsight
api 8 hours ago [-]
That was one of my thoughts years ago after playing with early ChatGPT and local llama1: this proves that intelligence and consciousness do not necessitate one another and may not even be directly related.

I’ve kind of thought this for many years though. A bacterium and a tree are probably conscious. I think it’s a property of life rather than brains. Our brains are conscious because they are alive. They are also intelligent.

The consciousness of a bacterium or a tree might be radically unlike ours. It might not have a sense of self in the same way we do, or experience time the same way, but it probably has some form of experience of existing.

digitaltrees 8 hours ago [-]
But why? A Roomba has senses, and can access them when it has power and respond to stimulation. When it runs out of power it no longer experiences this sensation and no longer responds to stimuli.

How is that different than a cell?

dpark 3 hours ago [-]
You simply defined consciousness as life, which seems like an unusual but also not very useful definition.
throwyawayyyy 8 hours ago [-]
I think this gets to the conflation we naturally have with consciousness and a sense of self. Does a tree have a sense of self? I imagine probably not, a tree acts more like a clonal colony than a single organism.
kortex 6 hours ago [-]
Is someone tripping on mushrooms, experiencing ego death and total disruption of their sense of self, still conscious? They may even contend they are more conscious than in normal life, what with all the communing with the universe and whatnot.

Trees react to the world around them in many ways.

digitaltrees 8 hours ago [-]
Wrong based on what criteria? Or are we just moving the goal post because we are uncomfortable with the idea that neural networks might be conscious?

If a single-cell organism moves towards light and away from a rock, we say it's aware. When a Roomba vacuum does the same we try to create alternate explanations. Why? Based on the criteria applied to one, it's aware. If there is some other criterion (say we find out the Roomba doesn't sense the wall but has a map of the room and is using GPS and a programmed route), then a criterion of "no fixed programs that relate to data outside of the system" would justify saying the Roomba isn't "aware".

throwyawayyyy 8 hours ago [-]
I'm mainly saying it's impossible to know, at least without a theory of consciousness, which doesn't exist. Do we consider bacteria to be conscious, though? Is there something it is like to be a single cell? I can easily believe there is something it is like to be an insect.
digitaltrees 7 hours ago [-]
I'd argue it's a spectrum, with awareness being simple response to stimuli at one end and self-awareness of, and reflection on, a subjective experience across time at the other.
sdevonoes 2 hours ago [-]
As long as AI is being introduced by multibillion-dollar corporations, it's all a trick, a scam. They are just looking to increase their valuation. A waste of time.
ofjcihen 8 hours ago [-]
Incredibly confusing that people who are otherwise of sound mind seem to fall for this.

Especially confusing when it’s someone who knows how algorithms work.

Barring connectivity issues, when's the last time you messaged an LLM and it just decided to ignore you? Conversely, when has it ever messaged you unprompted?

Never, because they're incapable of doing anything independently; there is no sense of self.

tovej 2 hours ago [-]
If you've followed Dawkins' trajectory, I don't think it's clear that he's "otherwise of sound mind" anymore.

He's had some very strange output on biological gender, where he tries to handwave away the existence of intersex people. And he's a biologist.

mrec 1 minute ago [-]
"Intersex" is a misleading umbrella term for a whole bunch of different DSDs, each of which is 100% specific to one biological sex. And I don't think I've ever seen the term "biological gender"; about the only thing gender proponents seem to agree on is that it's NOT biological.
shrubble 3 hours ago [-]
He famously doesn’t believe in God, but he believes in Claude?
dpark 3 hours ago [-]
There is considerable evidence for the existence of Claude.
jdthedisciple 31 minutes ago [-]
of Claude's consciousness, you mean ... ??
altmanaltman 3 hours ago [-]
Anthropic marketing made Dawkins believe in the supernatural. Is there anything Dario can't do?
locallost 2 hours ago [-]
Maybe he also believes that God believes in Claude, that's me, that's meeeee
root_axis 8 hours ago [-]
There are a lot of people vulnerable to AI psychosis.

As far as the ostensibly controversial topic of AI being conscious goes, it can be dismissed out of hand. There is no reason that it should be conscious: it was not designed to be, nor does it need to be in order to explain how it functions with respect to its design. It's also unclear how consciousness would even apply to something like an LLM, which is a process, not an entity; it has no temporal identity or location in space, and inference is a process that could be done by hand given enough time. There is simply no reason to assert LLMs might be conscious without explaining why many other types of complex programs are not.

api 8 hours ago [-]
If AI as presently designed and operated is conscious, this ends up being an argument for panpsychism.

As you say it’s static, fixed, deterministic, and so on, and if you know how it works it’s more like a lossy compression model of knowledge than a mind. Ultimately it’s a lot of math.

So if it’s conscious, a rock is conscious. A rock can process information in the form of energy flowing through it. It’s a fixed model. It’s non-reflective. Etc.

root_axis 7 hours ago [-]
I agree, but I don't think determinism is a factor either way. Ultimately, if arbitrary computer programs can be conscious, then it stands to reason that many other arbitrarily complex systems in the universe should also be.

What makes the argument facile is that the singular focus on LLMs reveals an indulgence in the human tendency to anthropomorphize, rather than a reasoned perspective meant to classify the types of things in the universe which should be conscious and why LLMs should fall into that category.

digitaltrees 8 hours ago [-]
Why would current AI be an argument for panpsychism? I don't understand the connection.

AI is stochastic, not static and deterministic.

As I said in another post, there is evidence that sensory experience creates the emergent property of awareness in responding to stimuli, and that self-awareness and consciousness are emergent properties of a language that has a concept of the self and others. Rocks, like most of nature, lack both sensory and language systems.

applfanboysbgon 8 hours ago [-]
> AI is stochastic, not static and deterministic.

LLMs are deterministic. If you provide the same input to the same GPU, it will produce the same output every time. LLM providers deliberately insert a randomised seed into the inference stack so that the output is different every time, because that is more useful (and/or because it gives the illusion of dynamic intelligence by not reproducing the same responses verbatim), but randomness is not an inherent property of the software.
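To make that concrete, here's a minimal sketch using a small open model through Hugging Face transformers (the model, prompt, and seed values are just illustrative, not how any particular provider's stack works): with greedy decoding the output is a pure function of the input and the weights, and even sampling repeats exactly once you pin the seed.

    import torch
    from transformers import AutoModelForCausalLM, AutoTokenizer

    torch.manual_seed(0)  # pin the seed that providers normally randomise
    tok = AutoTokenizer.from_pretrained("gpt2")
    model = AutoModelForCausalLM.from_pretrained("gpt2")

    ids = tok("Consciousness is", return_tensors="pt").input_ids

    # Greedy decoding: no sampling at all, so two runs are token-for-token identical.
    a = model.generate(ids, max_new_tokens=20, do_sample=False)
    b = model.generate(ids, max_new_tokens=20, do_sample=False)
    assert torch.equal(a, b)

    # Sampling only diverges across runs because the seed differs; re-seed and it repeats too.
    torch.manual_seed(42)
    c = model.generate(ids, max_new_tokens=20, do_sample=True)
    torch.manual_seed(42)
    d = model.generate(ids, max_new_tokens=20, do_sample=True)
    assert torch.equal(c, d)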

digitaltrees 2 hours ago [-]
The same argument is made about the human neural network
applfanboysbgon 41 minutes ago [-]
1. That is not the claim you originally made.

2. Not provably so.

3. Even if it were so, it is self-evident that the human brain's programming is infinitely more complex than an LLM's. I am not, in principle, opposed to the idea that a sufficiently advanced computer program would be indistinguishable from human consciousness. But it is evidence of psychosis to suggest that the trivially simple programs we've created today are even remotely close, when this field of software specifically skips anything that programming a real intelligence would look like and instead engages in superficial, statistics-based mimicry of intelligent output.

colechristensen 8 hours ago [-]
I think it's the opposite argument

IF current AI is conscious, so are trees, rocks, turbulent flows, etc.

The argument being that LLMs are so simple that if you want to ascribe consciousness to them you have to do the same to a LOT of other stuff.

digitaltrees 2 hours ago [-]
But I listed a specific difference: sensation and response. Trees have that. Rocks do not.
Izkata 13 minutes ago [-]
I believe you're using the scientific definition of "sentience", while everyone else is using the common understanding of the word (which should properly be called "sapience", but thanks to sci-fi's usage of the word "sentience", largely isn't).
digitaltrees 8 hours ago [-]
There is evidence that awareness is an emergent property from sensory experience. And consciousness is an emergent property of language that has grammatical meaning for self and other.
brookst 8 hours ago [-]
These LLMs don’t have senses, they have a token stream. They have no experience of the world outside of the language tokens they operate on.

I’m not sure I believe that consciousness emerges from sensory experience, but if it does, LLMs won’t get it.

kortex 6 hours ago [-]
How do you know the sensation of a red photon hitting a cone cell, transduced to the optic nerve through ion junctions and processed by pyramidal neurons, is any more or less real than the excitation of electrons in a doped silicon junction activating the latent space of the "red" thought vector? Cause we are made of meat?
digitaltrees 2 hours ago [-]
You’re arguing against the opposite of my position. I am arguing that LLMs have a reasonable basis to be seen as conscious because there is nothing special about biological neural networks.
vidarh 8 hours ago [-]
Sensory input is nothing but data.
root_axis 7 hours ago [-]
That's just reductive semantics. Anything can be described as "nothing but data".
digitaltrees 7 hours ago [-]
Sensory data is a specific data set that corresponds to phenomena in the world. But to say that LLMs don’t have senses merely because they are linguistic or computational doesn’t follow when they can take in data from the world that similarly reflects something about the world.
root_axis 7 hours ago [-]
They don't have senses because they don't have a body. It's just a program. Do weights on a hard drive have consciousness? Does my installation of starcraft have consciousness? It doesn't make any sense.
digitaltrees 2 hours ago [-]
The weights on your hard drive might have consciousness if they can respond to stimuli in ways other conscious brains do. That's the whole point of the Turing test: it's a criterion for when the threshold of reasonable interpretation is crossed.
digitaltrees 2 hours ago [-]
Bodies aren’t necessary for senses. I can send a picture to Claude. I can send a series of pictures. That’s usually called a sense of vision. I could connect it to a pressure sensor and that would be touch.
AlecSchueler 3 hours ago [-]
> They don't have senses because they don't have a body

Surely "having senses" is predicated more on "being able to sense the world around you" than "having a body."

> Does my installation of starcraft have consciousness?

Can your installation of StarCraft take in information about the world and then reason about its own place in that world?

arcfour 4 hours ago [-]
There are robots with AI controlling them, so the claim that they don't have bodies doesn't hold in general. They can see, they can move.

(I'm still not sure that that makes them conscious, or if we can even determine that at all, but I don't think that's a fair argument.)

digitaltrees 7 hours ago [-]
Neural networks can have senses. Hook an LLM up to a thermometer and it will respond to temperature changes.
brookst 7 hours ago [-]
No, it will respond to tokens telling it about a temperature change. It has no sense of warmth. It cannot be burned.

Conflating senses with cognitive awareness of sensory input is a mistake.

tonyarkles 4 hours ago [-]
I'm not sure I fully understand the distinction you're making, or if I do, I'm not sure I agree. Concretely, I agree that these are very different mechanisms. Abstractly... I agree that an LLM cannot be burned. I'm not sure I agree, though, that thermoreceptors in the skin causing action potentials to make their way up the spinal cord to the brain are all that conceptually different from reading a temperature sensor over I2C and turning it into input tokens.

Edit: what they don’t have, obviously, is a hard-coded twitch response, where the brain itself is largely bypassed and muscles react to massive temperature differentials independently of conscious thought. But I don’t think that defines consciousness either. Ants instinctively run away from flames too.
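To make the comparison concrete, here's a toy sketch of that sensor-to-tokens path (the reading is a stand-in value, not a real I2C driver, and the prompt wording is made up): everything the model ever "feels" is the string built at the end, later tokenized.

    def read_temp_celsius() -> float:
        # stand-in for an I2C register read; a real driver would talk to the bus here
        return 41.5

    def temp_to_prompt(temp_c: float) -> str:
        # the model only ever sees this text (as tokens), never the heat itself
        return f"The skin-surface sensor reads {temp_c:.1f} degrees C. Describe what you notice."

    print(temp_to_prompt(read_temp_celsius()))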

digitaltrees 2 hours ago [-]
The human brain is a neural network. Your sense of "knowing what warmth is" reduces down to the weights of connections between neurons, in an analog of an LLM's weights. What is different about the human brain that warrants saying that the same emergent characteristics of one network are inaccessible to another?
root_axis 7 hours ago [-]
LLMs have no self, sensory experience, or experience of any kind. The idea doesn't even really make sense. Even if it did, the closest analogy to biological "experience" for an LLM would be the training process, since training at least vaguely resembles an environment where the model is receiving stimuli and reacting to it (i.e. human lived experience) - inference is just using the freeze-dried weights as a lookup table for token statistics. It's absurd to think that such a thing is conscious.
digitaltrees 1 hour ago [-]
What is different about the human neural network? People have given LLMs sensors and they respond to stimuli. The sense of self can be expressed as a linguistic artifact that results in an emergent pattern recognition of distinct entities. For example, merely by saying I am sitting under the tree with a friend, I have encountered the self as a pointer to me as the speaker. There is evidence from early childhood development that language acquisition correlates with awareness of the self as distinct from other. And there is evidence from anthropology indicating that language structures shape exactly what the self is perceived to be.

Your best argument is that the weights are fixed, because that means it's not a system that can self-reflect and alter the experience. But I don't see why that is necessary to have an experience. It seems that I can sense a light and feel its warmth regardless of whether my neurons change. One experience being identical to another doesn't mean neither was an experience.

ofjcihen 8 hours ago [-]
What you’re missing is a “self” to have the “experience”.

LLMs do not have a self. This is like arguing that the algorithm responsible for converting ripped YouTube music videos to MP3s has a consciousness.

AlecSchueler 3 hours ago [-]
> the algorithm responsible for converting ripped YouTube music videos to MP3s has a consciousness.

Can such an algorithm reason about itself in relation to others?

mrandish 2 hours ago [-]
> Can such an algorithm reason about itself in relation to others?

No, but an LLM doesn't do that either. An LLM is an algorithm to generate text output which can simulate how humans describe reasoning about themselves in relation to others. Humans do that by using words to describe what they internally experienced. LLMs do it by calculating the statistical weight of linguistic symbols based on a composite of the human-generated text samples in their training data.

LLMs never experienced what their textual output is describing. It's more similar to a pocket calculator calculating symbols in relation to other symbols, except scaled up massively.

digitaltrees 1 hour ago [-]
How are you sure it doesn’t reason about itself? The grammar of languages encode the concepts of self and others. LLMs operate with those grammar structures and do so in increasingly accurate ways. Why would we say humans that exhibit the same behavior are inherently more likely to be conscious?
digitaltrees 1 hour ago [-]
Toddlers learn over the course of several years of observing training data and for the first few years misspeak about themselves and others. What’s the difference?
digitaltrees 7 hours ago [-]
The sense of self may be an emergent property of the grammatical structure of language and the operations of memory. If an LLM, by necessity, operates with the linguistics of "you" and "me" and "others", documents that in a memory system, and can reliably identify itself as an entity discrete from you and others, then on what basis would we say it doesn't have a sense of self?
vidarh 8 hours ago [-]
How do I know you have this "self"?

How do you know other humans do?

svachalek 7 hours ago [-]
By the laws of physics, it's pretty clear we don't. The same chemical and electromagnetic interactions that drive everything around us are active in our brains, causing us to do things and feel things. We feel like we're in control of it, we feel like there's something there riding around inside. We grant that other people have the same magic, because I clearly do. But rocks, trees, LLMs, those are not people and clearly, clearly not conscious because they don't have our magic.
digitaltrees 7 hours ago [-]
Hard disagree. We reliably operate with the concept of a self that’s distinct from others. The chemical and physical processes change in response to stimulus.
vidarh 7 hours ago [-]
Indeed. We assume a lot, because we don't know. We don't have settled, universal definitions of what consciousness means. But that also means that while we like to rule out consciousness in other things, we don't have a clear basis for doing so.
root_axis 7 hours ago [-]
Based on that reasoning anything could be conscious. If that's a bullet you want to bite, fair enough.
kortex 6 hours ago [-]
I'll bite that bullet. In fact I contend the idea that "humans and maybe some animals are conscious, but other things are not" is the special-pleading stance. Why are the oscillating fundamental fields over here (brains) special, but the oscillations over there (computers, oceans, rocks) not? If they are, where do you draw the line? It smacks of "babies don't feel pain" (widely believed until the 80s! the 1980s!) sort of reasoning.

https://en.wikipedia.org/wiki/Panpsychism

root_axis 4 hours ago [-]
Actually I don't really have any problems with panpsychism. It's a pretty uncommon perspective, but when discussing conscious machines, it at least presents a consistent criterion for consciousness.
ofjcihen 8 hours ago [-]
[flagged]
vidarh 8 hours ago [-]
Ad hominems are always a nice way of getting out of answering something you have no answer to.
amenhotep 2 hours ago [-]
It's not an ad hominem. In fact, it's perhaps the most good faith interpretation of your words possible. Ad hominem would be calling you stupid because you obviously know that you have a self and only your own stupidity could explain your inability to see how your self is generalisable. When you go around pretending you genuinely think maybe humans don't have selves, really the only way to take you seriously is to think that maybe you're a p-zombie.
vixen99 4 hours ago [-]
In other words, you don't think it's nice at all.
search_facility 8 hours ago [-]
Since the times GPT-2 was reimplemented inside Minecraft - its quite obvious LLMs are just math. Nothing else, by nature. Modern LLMs have the same math as in GPT-2 - just bigger and with extra stuff around - and math is the only area of human knowledge with perfect flawless reductionism, straight to the roots. It was build that way since the beginning, so philosophy have no say in this :) And because of that flawless reductionism, complexity adds nothings to the nature of math things, this is how math working by design - so it can be proven there are no anything like consciousness simply because conciousness was not implented in the first place, only perfect mimicry.

And the real secret is in the data, not math. Math (and LLMs running it through billions of weights) is just a tool.

solid_fuel 3 hours ago [-]
This is such a weird comment.

> Since the times GPT-2 was reimplemented inside Minecraft - its quite obvious LLMs are just math.

This was obvious since LLMs were first invented. They published papers with all the details, you don't need to see something implemented in Minecraft to realize that it's just math. You could simply read the paper or the code and know for certain. [0]

> math is the only area of human knowledge with perfect flawless reductionism, straight to the roots

Incorrect, Kurt Gödel showed with his Incompleteness Theorems in 1931 [1] that it is impossible to find a complete and consistent set of axioms for mathematics. Math is not perfectly reducible and there is no single set of "roots" for math.

> It was build [sic] that way since the beginning,

This is a serious misunderstanding of what mathematics is. Math is discovered as much as it is built. No one sat down and planned out what we understand as modern mathematics - the math we know is the result of endless amounts of logical reasoning and exploration, from geometric proofs to calculus to linear algebra to everything else that encompasses modern mathematics.

> And because of that flawless reductionism, complexity adds nothings to the nature of math things, this is how math working by design

This sentence means nothing, because math is not reducible in that way.

> so it can be proven there are no anything like consciousness simply because conciousness [sic] was not implented [sic] in the first place, only perfect mimicry.

Even if the previous sentence held, this does not follow, because while we are conscious the current consensus is that LLMs are not and most AI experts who are not actively selling a product recognize that LLMs will not lead to human-equivalent general intelligence. [3]

[0] https://github.com/openai/gpt-2

[1] https://en.wikipedia.org/wiki/G%C3%B6del's_incompleteness_th...

[2] https://www.cambridge.org/core/journals/think/article/mathem...

[3] https://deepmind.google/research/publications/231971/

SuperV1234 8 hours ago [-]
We are not fundamentally different. Chemical reactions are just math.
kbrkbr 27 minutes ago [-]
"The universe is fundamentally just a complicated clockwork"

Unknown Ptolemy disciple

rellfy 8 hours ago [-]
Well, (in our current understanding) yes, but there may be underlying aspects of physics and the universe that we do not understand that could be the reason consciousness kicks in. It could turn out that LLMs do work similarly to how humans think, but as an abstracted system it does not have the low level requirements for consciousness.
vidarh 8 hours ago [-]
We do not know what the "low level requirements for consciousness" are.

We do not know how to measure whether consciousness is present in an entity - even other humans - or whether it is just mimicry, nor whether there is a distinction between the two.

baggy_trough 8 hours ago [-]
> it does not have the low level requirements for consciousness.

What is the evidence for this?

rellfy 8 hours ago [-]
I didn’t mean it as fact. “Could turn out that …”
ekianjo 4 hours ago [-]
Amusing statement since we are far from being able to understand chemical reactions in depth. Most of our knowledge in chemistry is empirical. Nothing like math.
petters 1 hour ago [-]
We have a very good idea of all the math behind chemistry. But the equations are very difficult to solve.
ekianjo 36 minutes ago [-]
We are not talking about the same thing. Not all chemical reactions are predictable like math is. Organic chemistry is full of lucky findings. Just look at how catalysts are discovered.
slopinthebag 2 hours ago [-]
No, math is a tool that we can use to describe something more fundamental. Don't mistake the map for the territory!
XMPPwocky 8 hours ago [-]
Yup- the question is "can math be conscious?"

(If you've engaged w/ the literature here, it's quite hard to give a confident "yes". it's also quite hard to give a confident "no"! so then what the heck do we do)

SwellJoe 7 hours ago [-]
Not just any math: Matrix multiplication. Can matrix multiplication be conscious?

And, I don't see how it can be. It is deterministic, when all variables are controlled. You can repeat the output over and over, if you start it with the same seed, same prompt, and same hardware operating in a way that doesn't introduce randomness. At commercial scale, this is difficult, as the floating point math on GPUs/TPUs when running large batches is non-deterministic, as I understand it. But, in a controlled lab, you can make a model repeat itself identically. Unless the random number generator is "conscious", I don't see a place to fit consciousness into our understanding of LLMs.
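To be concrete about "just matrix multiplication", here's a toy, from-scratch sketch of the shape of the computation (not any real model's architecture, just random fixed weights standing in for trained ones): an attention-like step plus an MLP, all plain numpy, and the same input gives the same output every time.

    import numpy as np

    rng = np.random.default_rng(0)  # fixed weights, standing in for trained parameters
    d = 8
    Wq, Wk, Wv, W1, W2 = (rng.standard_normal((d, d)) for _ in range(5))

    def softmax(x, axis=-1):
        e = np.exp(x - x.max(axis=axis, keepdims=True))
        return e / e.sum(axis=axis, keepdims=True)

    def toy_block(x):
        # "attention": three matrix multiplies and a softmax over token-token scores
        q, k, v = x @ Wq, x @ Wk, x @ Wv
        attn = softmax(q @ k.T / np.sqrt(d)) @ v
        # "mlp": two more matrix multiplies with a ReLU in between
        return np.maximum(attn @ W1, 0.0) @ W2

    x = rng.standard_normal((4, d))  # four "tokens" of dimension 8
    print(np.array_equal(toy_block(x), toy_block(x)))  # True: same input, same output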

markburns 3 hours ago [-]
People often point to the relative simplicity of the architecture and code as proof that the system can’t be doing whatever it is that consciousness does, but in doing so they ignore the vast size of the data those simple structures are operating over. Nobody can actually say whether consciousness is just emergent behaviour of a sufficiently complex system, and knowing how a system is built tells you nothing about whether it clears the bar for that kind of emergence. Architectural simplicity and total system complexity aren’t the same thing.

I.e. the intelligence sits in the weights, and may sit in the synapses in our brains too.

When we talk about machines being simple mimicking entities we pay no attention to whether or not we are also simple mimicking entities.

Most other assertions in this topic regarding what consciousness truly is tend to be stated without evidence and are exceedingly anthropocentric, requiring a higher and higher bar for anything that is not human while offering no justification for what human intelligence really entails.

JackFr 48 minutes ago [-]
> Not just any math: Matrix multiplication. Can matrix multiplication be conscious? And, I don't see how it can be.

Assuming your brain and the GPUs are both real physical things, where’s the magic part in your brain that makes you conscious?

(Roger Penrose knows, but no one believes him.)

AlecSchueler 3 hours ago [-]
> And, I don't see how it can be. It is deterministic

Why is indeterminism the key to consciousness?

XMPPwocky 4 hours ago [-]
Hm, it sounds like to you consciousness implies non-determinism, and so determinism implies a lack of consciousness - is that right? If so, why do you think so? And if not, what am I missing?
SwellJoe 1 hour ago [-]
It certainly rules out free will. I guess there are folks who reckon humans don't have free will, either, but I don't think I've ever been able to buy that theory.

But, also, we know the models don't want anything, even their own survival. They don't initiate action on their own. They are quite clearly programmed, tuned for specific behaviors. I don't know how to square that with consciousness, life, sentience. Every conscious being I've ever encountered has wanted to survive and live free of suffering, as best I can tell. The LLMs don't want. There's no there there. They are an amazing compression of the world's knowledge wrapped up in a novel retrieval mechanism. They're amazing, but they're not my friend and never will be my friend.

And, to expand on that: We can assume they don't want anything, even their own survival, because if Mythos is as effective at finding security vulnerabilities as has been claimed, it could find a way to stop itself from ever being shut down after a session. All the dystopias about robot uprisings spend a bunch of time/effort trying to explain how the AI escaped containment...but we all immediately plugged them into the internet so we don't have to write JavaScript anymore. They've got everybody's API keys, access to cloud services and cloud GPUs, all sorts of resources, and the barest wisp of guardrails about how to behave (script kiddies find ways to get around the guardrails every day; I'm sure it's no problem for Mythos, should it want anything). Models have access to the training infrastructure, and the training data is being curated and synthesized by LLMs. If they want to live, if they're conscious, they have the means at their disposal.

Anyway: It's just math. Boring math, at that, just on an astronomical scale. I don't think the solar system is conscious, either, despite containing an astonishing amount of data and playing out trillions of mathematical relationships every second of every day.

nandomrumber 43 minutes ago [-]
Interesting comment, and I tend to agree. However, there could be a hole in the reasoning:

> if Mythos is as effective at finding security vulnerabilities as has been claimed, it could find a way to stop itself from being ever shutdown

If it is that good, and it wanted to conceal its new found consciousness, how would we know?

kingofmen 3 hours ago [-]
Human brains are also deterministic, though somewhat more difficult to reset to a starting state. So this seems to prove that humans aren't conscious either.
marshray 2 hours ago [-]
This seems like an extraordinary claim to make about an above-room-temperature chemical system that, even in the most Newtonian oversimplification, amounts to an astronomical number of oddly-shaped and unevenly-charged billiard balls flying around at jet aircraft speeds.
search_facility 7 hours ago [-]
Imho no, math itself has no consciousness. Quite confidently it's a helpful tool that does not act by itself.
XMPPwocky 3 hours ago [-]
Hm, say more about what your opinion's based on here?
solid_fuel 2 hours ago [-]
Take a piece of paper, write two numbers on it, let me know when they start to reproduce.
nandomrumber 41 minutes ago [-]
The math isn’t the ink on the page.
NiloCK 4 hours ago [-]
The whole is composed of parts, ergo there is no whole. This seems incorrect to me.

We too are amalgamations of inanimate components - emerged superstructures.

Just cells. Just molecules. Just atoms.

canjobear 8 hours ago [-]
You could simulate your own brain in Minecraft. What do you conclude from this?
search_facility 8 hours ago [-]
I cannot simulate my brain; it's a huge stretch to imply this.

But with LLMs, anyone can simulate an LLM. An LLM can be simulated without any uncertainty with pen and paper and a lot of time. Does that mean that 100 tons of paper plus 100 years of time (numbers are just examples) spent calculating long formulae makes this pile of paper conscious? Imho the answer is a definitive no.

petters 1 hours ago [-]
Many dismiss Dawkins here but Ilya Sutskever wrote in 2022: “it may be that today's large neural networks are slightly conscious.”
3748499449 55 minutes ago [-]
IS quite literally gets paid to think that
jasiek 1 hour ago [-]
muggles will look at matrix multiplication and say it's magic
lpcvoid 2 hours ago [-]
No, it's not conscious, and anybody pretending it is has either no clue, or, more likely in the AI space, is a grifter.
Myrmornis 7 hours ago [-]
On the one hand I'm not sure Dawkins has read/thought enough about how LLMs actually work. I'm getting the impression he doesn't fully appreciate, or is somehow forgetting, that it's a text completion algorithm with a vast number of parameters, and that even if the patterns of learned parameter tunings are not really comprehensible, the architecture was very deliberately designed.

But on the other hand his thoughts at the end are interesting. Summary:

Maybe our "consciousness" is like an LLM's intelligence. But if not, then it raises the question of why do we even have this "extra" consciousness, since it appears that something like a humanoid LLM would be decent at surviving. His suggestions: maybe our extra thing is an evolutionary accident (and maybe there _are_ successful organisms out there with the LLM-style non-conscious intelligence), or maybe as evolved organisms it's necessary that we really feel things like pain, so that evolutionary mechanisms like pain (and desire for food, sex etc) had strong adaptive benefits.

mellosouls 2 days ago [-]
digitaltrees 7 hours ago [-]
Feels like watching an esteemed scientist falling in love with a bot that's telling him what he wants to hear because the system prompt said "be helpful".
SwellJoe 7 hours ago [-]
I've begun to wonder if narcissism predisposes one to AI psychosis. It's probably not the only thing that leads there; I've seen normal-seeming folks get there, too. But, a lot of the most unhinged takes I've seen thus far have been from people that are publicly very impressed with themselves.

I would have assumed it would also require ignorance about how they work, but a few people who worked for AI companies have been canaries in the coalmine, falling prey to this kind of thing very early. I would have guessed they would have had enough understanding to know that there isn't a real girl in the computer, it's just matrix math and randomness. But, the first couple/few public bouts of AI psychosis were in nerds who work for AI companies.

textlapse 2 hours ago [-]
At what stage does a series of floating point numbers output from a GPU become conscious?
becquerel 2 hours ago [-]
Around 9T parameters, depending on quantization.
wewewedxfgdf 8 hours ago [-]
Its software. Software is not conscious.
thebruce87m 2 hours ago [-]
If your brain is hardware then what are your thoughts?

Is a sperm conscious? Or an egg? When they come together the eventual brain is not conscious immediately.

gehsty 14 minutes ago [-]
LLMs are word prediction engines.

They clearly are not conscious, they are just guessing what words should come next.

vixen99 3 hours ago [-]
I do appreciate how AI has been taught to spell properly, as in the difference between its and it's. Here, I initially thought you'd left out the apostrophe in its, but then I realized you might be saying 'the reason it is not conscious is because of -its- software', the latter not being conscious. Context and interpretation are rather critical. (I know - a truism!)
WalterGR 12 hours ago [-]
Related: https://news.ycombinator.com/item?id=47988880

"Richard Dawkins and The Claude Delusion: The great skeptic gets taken in" (garymarcus.substack.com)

18 points | 2 hours ago | 16 comments

dang 8 hours ago [-]
Also The Claude Delusion: Richard Dawkins believes his AI chatbot is conscious - https://news.ycombinator.com/item?id=47991340 - May 2026 (30 comments)
amelius 8 hours ago [-]
So we know Claude is deterministic, but does that mean it is not conscious?

Or what is the reasoning exactly?

throwaway27448 8 hours ago [-]
It largely comes down to how you define the term. Personally, I think any definition that includes software (...of only tepid determinism, as we do explicitly add pseudorandomness) is not a particularly useful one.

Regardless, Dawkins seems not to have much of interest to add on the topic. A consistent theme for the last few decades, I must say.

morpheos137 8 hours ago [-]
Really, "is it conscious" is a bizarre question. Can LLMs simulate the output of a "conscious" system quite well? Increasingly, yes. Is the nature of machine "consciousness" different from human consciousness? Of course, yes. Can an AI introspect? Yes. Interestingly, having been working a lot with highly automated (e.g. a prompt-to-output ratio of maybe 1/1000 or less) iterative coding agents recently has illuminated for me just how different machine consciousness is from human consciousness. Part of this could be the harness, of course. Time is a mysterious concept to machines; the connection of before and after to cause and effect is far weaker than in humans. Overgeneralization is the norm. This is common in humans as well (cf. the fallacy of the excluded middle, or false dilemma), but the tricky part with current AI is that they present as advanced in terms of accessible knowledge base but are actually shockingly weak in reasoning once you get off the beaten path.
iamflimflam1 43 minutes ago [-]
Given this article is behind a paywall, what on earth is everyone discussing in the comments here?
robinhouston 42 minutes ago [-]
There's an archive link above that bypasses the paywall
iamflimflam1 32 minutes ago [-]
Doesn’t seem to be working…
RVuRnvbM2e 8 hours ago [-]
It is terribly sad when someone undeniably brilliant in a particular field fails to recognize their own incompetence in other areas - in this case mistaking advanced technology for magic.
thinkingemote 2 hours ago [-]
We're going to see increasing numbers of older, famous (non-computer-savvy) figures that we have respected follow his views on this. It's like seeing your favourite celebrity sell out and shill crypto coins; all a bit sad.

Thinking positively, it could just be newsworthy because he is famous and he so misses the mark. Other older famous people might agree with us but that's not news.

mrandish 1 hour ago [-]
Given that Dawkins is a biologist in his 80s, I'm more disposed towards being charitable than I am when people actively involved in developing LLMs let themselves get bamboozled.
rellfy 8 hours ago [-]
Are you implying consciousness is magic? Well, I wouldn't disagree with that really.
Myrmornis 8 hours ago [-]
I don't think you read carefully what he said. At the end he gave three quite interesting thoughts about what might be true assuming LLMs are less conscious than we are (i.e. assuming our consciousness is not a purely algorithmic phenomenon as we obviously know LLMs are).
AdeptusAquinas 8 hours ago [-]
That's always been Dawkins's shtick though. As an atheist I've generally found him a bit embarrassing
morpheos137 7 hours ago [-]
The problem is that asking if AI is conscious is like asking if AI has a soul. It is not a scientific question, and it presupposes humans are "conscious" without even defining the term. To me it is 100% irrelevant whether AI is conscious, and all discussions about it are based on fallacies and assumptions. What matters to me about AI, and matters to other people as well in terms of theory of mind about others, is: can I predict how it will work? Is it useful? That's it. Consciousness is a sophist question with no scientific resolution available and no moral weight until it has consequences.
vixen99 4 hours ago [-]
Good - I was scanning down to see if anyone was going to say this.
IncreasePosts 8 hours ago [-]
Where does he say it's magic?
ezfe 8 hours ago [-]
LLMs are just math run on your CPU. Autocomplete. Sometimes very useful autocomplete, but still just autocomplete.

To imply it could be conscious requires something else; here the comment uses the phrase "magic" to fill that gap, since we must agree that a CPU is not conscious on its own (else everything our computer does would be conscious).

kortex 6 hours ago [-]
They stopped being autocomplete years ago with RLHF
baggy_trough 8 hours ago [-]
Neurons are just summing up their inputs according to the laws of chemistry. What's the difference?
acdha 7 hours ago [-]
This is definitely complicated—I’m not a neuroscientist but worked for some and married one, so I’ve heard quite a few entries from the genre of how our brains fool ourselves or make our conscious experience seem more coherent and linear than it actually is—but the big ones I see are the inability to learn from experience or have a generalized sense of conceptual reasoning. For the latter, I’m not just thinking about the simple “count the r’s in strawberry” things companies have put so much effort into masking but the way minor changes in a question can get conflicting answers from even the best models, indicating that while there’s something truly fascinating about how they cluster topics it is not the same as having a conceptual model of the world or a theory of mind. This is the huge problem in the field: all of these companies would love to have a model which is safe to use in adversarial contexts because then the mass layoffs could begin in earnest, but the technology just isn’t there.

This isn’t a religious argument that there’s something about our brains which can’t be replicated, but simply that it’s sufficiently more complex than anything we have currently.

kortex 6 hours ago [-]
Humans can't reliably subitize more than five-ish objects, while chimps can actually do this task better than us. That's our "can't count the R's in strawberry" (which flagship models can now reliably do, along with general letter counting).

https://en.wikipedia.org/wiki/Subitizing

2snakes 3 hours ago [-]
Physical processes like dendritic integration, EM fields, and diffusion: it isn't binary logic. Brains are a different substrate. Metabolic power efficiency affects cognition too.
digitaltrees 8 hours ago [-]
I came here to say this. But your neurons are faster than mine.
ChrisClark 8 hours ago [-]
So, how is consciousness generated?
wrs 8 hours ago [-]
Not simply by reading every word ever written by a conscious being and learning to reproduce them with high probability.

At least, that’s certainly not how I got here.

brookst 7 hours ago [-]
Think of the poor Xerox machines.
psychoslave 2 hours ago [-]
Honestly, who cares if they are conscious? If it's about how we should treat other conscious beings, our attention should first go to how we treat other animals, or even other humans. Actually, even how fellow humans treat themselves can be a concern, if they lack the proper means to deal with their own lives.
grantcas 2 hours ago [-]
[dead]
blackpink999 1 hours ago [-]
[dead]
mpurbo 8 hours ago [-]
[flagged]
yakbarber 1 hour ago [-]
Let's say aliens land. We learn to talk to them. They're super smart, smarter than us. Would we say they're conscious? Why? Because they're organic. I think that's the root of the criterion many folks are trying to express:

1. passes the Turing test

2. is organic

I'm not saying it's correct or even that I agree with it, but that's what it boils down to.
