Friday, July 26, 2024

I Really Wonder If The Author Isn't Being Too Cautious

This appeared a few days ago:

Here’s a brain teaser: Will AI ever reach consciousness?

Paul Davies

Physicist, writer

July 20, 2024 — 5.00am

Consciousness is the most basic aspect of human experience, and yet it remains deeply mysterious. Centuries of deliberation by philosophers, theologians and scientists have made little headway in explaining its origin.

Now there is a new twist to this age-old conundrum. The rapid advance in artificial intelligence, or AI, has reignited the question of whether a machine can truly think and feel and possess a sense of self, or whether there is some special form of magic going on inside biological brains that isn’t captured by the world’s most powerful computer systems.

There is no doubt that products such as ChatGPT are dazzling in their capabilities, such as writing essays and even producing computer code. Superficially, they seem to “know what they are doing”. But that is an illusion: these systems are trained to trawl vast amounts of data and organise it in humanly useful ways. That doesn’t make them conscious, though.

Speculation about AI consciousness is based on the popular misconception that the human brain is a type of digital supercomputer. To be sure, both brains and computers process information with extraordinary efficiency, but they do so in very different ways. The pertinent question is not whether digital computers as we know them can be conscious, but whether we can build systems of some sort in the lab that mimic the way our brains work.

One idea is to use neurons as circuit components to perform basic information-processing tasks. Presumably, conscious experiences are generated by complex electrochemical patterns swirling inside our heads. Perhaps proto-consciousness can be conjured up in the lab by recreating such patterns?

Philosophers, following Thomas Nagel, often express the essence of consciousness as “what it is like” to be, say, a human or a bat or a bee. Well, is it “like” anything at all to be a jumble of wires, neurons and gel sloshing about in a dish? And how would such a conscious entity let us know the answer anyway?

Artificial mini-brains in a dish offer a bottom-up way to probe consciousness. But there is also a top-down approach, which is to augment human brains with hi-tech electronic gizmos. Elon Musk’s Neuralink company claims to have implanted microscopic needles into a human subject, enabling the person to operate external smart devices by the power of thought alone. It could be the first step on the way to merging brains and supercomputers.

Eventually, electronic augmentation of brains may noticeably affect “what it is like” to be human. Unlike humble dish-brains, these “transhumans” could report on their enhanced subjective experiences.

Any attempt to unravel, mimic or enhance consciousness is an ethical minefield. If an artificial system is conscious, does it have rights and responsibilities, emotions, a sense of free will? Would humans have the right to switch off or kill such a sentient being if we felt threatened?

Running through these troubling questions is an assumption that consciousness isn’t an all-or-nothing phenomenon but comes in degrees. We feel far more remorse killing a dog than a cockroach because we suppose that a cockroach isn’t all that conscious anyway and probably has no sense of self or emotions.

But how can we be sure? And in the highly charged debates about abortion, euthanasia and locked-in syndrome, the actual level of consciousness is usually the critical criterion.

Without a theory of consciousness, however, it’s impossible to quantify it. Scientists haven’t a clue what exactly is the defining feature of neural activity that supports conscious experience.

Why do the electrical patterns in my head generate sentience and agency, whereas the electrical patterns in Ausgrid NSW don’t? (At least, I don’t think they do.) And we all accept that when we fall asleep, our consciousness is diminished, and may fade away completely.

A few years ago, the neuroscientist and sleep researcher Giulio Tononi at the University of Wisconsin proposed a mathematical theory of consciousness based on the way information flow is organised (roughly, the arrangement of feedback loops), which in theory enables a specific quantity of consciousness to be assigned to various physical states and systems.

Is a thermostat conscious? A dish-brain? ChatGPT? A lobster in a boiling pot? A month-old embryo? A “brain-dead” road accident victim? These vexatious examples might be easier to confront, and to legislate about, if we really understood the physical basis of consciousness.

The foregoing advances, while promising, tell us little about the subjective experiences that attend conscious events, such as the redness of red, the sound of a bell, or the roughness of sandpaper, sensations that philosophers call qualia.

How can we tell if an agent really has an inner life experiencing such qualia, or is just an automaton, a zombie, programmed to respond appropriately to sensory input, for example, by stopping at a red traffic light without actually “seeing red”?

And if one cannot tell from the outside what is going on inside, why does this inner subjective realm exist in the first place? What advantage does it confer in that great genetic lottery called Darwinian evolution? Even if we create a truly conscious AI, that final problem may lie forever beyond our ken.

Paul Davies is Regents’ Professor of Physics at Arizona State University and author of over 30 books, including The Demon in the Machine. He will be speaking at the Sydney Opera House as part of the Your Brain on AI event on August 17. Tickets: sydneyoperahouse.com

Here is the link:

https://www.smh.com.au/technology/here-s-a-brain-teaser-will-ai-ever-reach-consciousness-20240718-p5jutz.html
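Before my two cents, one aside: the Tononi “integrated information” idea Davies mentions is the one piece of this that can at least be played with numerically. Here is a minimal toy sketch of my own (it is not Tononi’s actual Phi calculation, and the update rule is an arbitrary assumption): it takes a pair of coupled boolean units and measures the mutual information between their next states, one crude way of asking how much information the whole system carries beyond its separated parts.

# Toy illustration only: NOT Tononi's actual Phi calculation.
# Two coupled boolean units; starting from a uniformly random state,
# how correlated are their next states? (Mutual information, in bits.)
from collections import Counter
from itertools import product
from math import log2

def step(a, b):
    # Arbitrary coupled update rule: each unit's next state depends on both.
    return (a | b, a & b)   # A' = A OR B,  B' = A AND B

# Uniform distribution over the four current states.
states = list(product([0, 1], repeat=2))
joint = Counter(step(a, b) for a, b in states)
joint = {s: n / len(states) for s, n in joint.items()}

# Marginal distribution of each unit's next state.
pA, pB = Counter(), Counter()
for (a2, b2), p in joint.items():
    pA[a2] += p
    pB[b2] += p

# Mutual information I(A'; B') in bits: zero if the units are decoupled.
mi = sum(p * log2(p / (pA[a2] * pB[b2])) for (a2, b2), p in joint.items())
print(f"toy 'integration' of the coupled pair: {mi:.3f} bits")

For this toy rule the answer comes out to about 0.12 bits, and it drops to zero if the two units are decoupled, which is the flavour of what Tononi’s measure tries to capture on a much grander scale.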

I have to say I think we will see effective machine consciousness in the next 20-30 years, with roughly parallel development of machine intelligence over the same period. I have no proof, but it is hard to imagine it taking much longer than that.

What is really above my pay grade is working out the implications of such progress and how society at large will respond to such a reality.

What do you think? Will generally smart machines emerge in the next decade or two, and what will it mean for us humans?

David.
