
Richard Dawkins’ reflections on AI consciousness are striking – not because they show that machines have crossed some hidden threshold into inner life, but because they reveal how readily we can be persuaded that they have (Richard Dawkins concludes AI is conscious, even if it doesn’t know it, 5 May).

Many will recognise the experience: a system that responds with fluency, humour and apparent understanding. At some point, simulation starts to feel like presence. But that shift tells us more about human cognition than machine consciousness. The error is a category one. These systems generate highly convincing representations of thought and feeling, but they provide no evidence of subjective experience. To move from one to the other is to mistake output for ontology – to infer an inner life where there is no credible mechanism for one.

There is an irony here. In his writing on religion, Dawkins has long argued that compelling narratives and deeply felt experiences are not in themselves evidence of underlying reality. The same standard should apply to machines now capable of producing those experiences on demand.

Language has been a reliable indicator of consciousness because in humans it is coupled to lived experience. In AI, that coupling does not exist. As systems become more capable, pressure to attribute agency will grow. If we fail to distinguish between behaviour and being, we risk building ethical frameworks on a misreading of the technology.

Dawkins is right to ask the question. But the answer cannot rest on how convincing the conversation feels – only on whether there is anything there that could, in principle, feel at all.
Dr Simon Nieder
Brampton, Derbyshire
