Quote Of The Year

Timeless Quotes - Sadly The Late Paul Shetler - "It's not Your Health Record, it's a Government Record Of Your Health Information"

or

H. L. Mencken - "For every complex problem there is an answer that is clear, simple, and wrong."

Thursday, June 30, 2022

It Is Hard Not To Agree That We Need Better And Smarter Regulation Of AI Than Exists At Present!

This appeared last week:

6:00am, Jun 20, 2022 Updated: 6:44pm, Jun 19

Alan Kohler: Sentient or not, AI needs regulating

Alan Kohler

In the 2001 film A.I. Artificial Intelligence, Professor Hobby (William Hurt) says lovingly to the AI robot he created: “You are a real boy, David.”

Life imitates art: Last week Google put an engineer on paid leave after he published the transcript of an interview with an artificial intelligence chatbot called LaMDA, claiming that it is sentient, about the level of a seven-year-old child.

Blake Lemoine, the engineer in question, appears to have decided he’s Professor Hobby, and LaMDA is his David.

There followed a spirited global debate about whether AI can be sentient and make its own decisions – can experience feelings and emotions, and go beyond being an intelligent processor of data.
Which was very interesting, with echoes of Rene Descartes and the Enlightenment, but it missed a serious 21st-century point.

Blake Lemoine was stood down because he violated Google’s confidentiality policy. That is, it was meant to be a secret.

Google also asserted that its systems merely imitated conversational exchanges and could appear to discuss various topics, but “did not have consciousness”.

But as the Dude perceptively remarked in another movie, The Big Lebowski: “Well, you know, that’s just, like, your opinion, man.”

Feelings are beside the point

We need to bear in mind that Google’s business model, and that of Amazon, Netflix, Facebook, Alibaba, Twitter, TikTok, Instagram, Spotify and a growing number of other businesses, is based on algorithms that watch what we do and predict what we’re likely to do in future and, more to the point, what we’d like to do, whether we know it or not.

The question of whether these algorithms have feelings is beside the point.

The business of manipulating our behaviour is unregulated because AI snuck up on governments and regulators – Google, Facebook and the others kept quiet about what they were doing until they had done it, and apparently they want to remain quiet about it, disciplining staff who blab.

Moreover, they now argue that the algorithms are not commercial products, or data, but speech and/or opinion, so they are protected by the constitutional protection of free speech in America and elsewhere.

If the output of the algorithms is “opinion” then the companies are shielded from all sorts of regulatory interference, including anti-competitive practices, libel and defamation, and accusations that they are, in fact, manipulating their customers. They are simply expressing opinions.

And naturally, the companies are not standing still – no business does. Huge resources are going into pushing the algorithms’ boundaries, making them smarter and better at manipulating us.

As part of that, last week Google’s Emma Haruka Iwao set a new world record by calculating Pi to 100 trillion digits, beating the previous record of 62.8 trillion. It took 157 days and required 128 vCPUs, 864GB of RAM, and 515 terabytes of storage.
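For readers curious what “calculating Pi to N digits” actually involves, here is a minimal sketch in Python using only built-in arbitrary-precision integers and Machin's 1706 formula (π/4 = 4·arctan(1/5) − arctan(1/239)). To be clear, this is an illustrative toy only: the record run described above used far more sophisticated algorithms and cloud infrastructure, and this snippet is practical only at small scales.

```python
def arctan_inv(x, digits):
    """arctan(1/x), scaled by 10**(digits + 10), via the Taylor series."""
    scale = 10 ** (digits + 10)   # 10 extra guard digits absorb rounding error
    power = scale // x            # (1/x)**(2k+1) scaled, starting at k = 0
    total = power
    k = 1
    while power:                  # terms shrink to zero, ending the loop
        power //= x * x
        term = power // (2 * k + 1)
        total += -term if k % 2 else term   # alternating signs of the series
        k += 1
    return total

def pi_digits(digits):
    """Return pi * 10**digits as an integer, via Machin's formula."""
    pi_scaled = 16 * arctan_inv(5, digits) - 4 * arctan_inv(239, digits)
    return pi_scaled // 10 ** 10  # drop the guard digits

print(str(pi_digits(50)))  # starts 314159265358979...
```

Even this naive method needs only integer arithmetic, which is why arbitrary-precision digit hunts scale mainly with memory and CPU time rather than with any exotic mathematics.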

Materialism versus enlightenment

Most of the people engaged in this week’s debate about AI sentience appear to be saying that it can never happen, that the machines just process data, but 50 years ago the idea of having any kind of conversation with a chatbot or having your travel preferences anticipated by an algorithm would have seemed like science fiction as well.

Those who say AI can never be sentient look a bit like Cartesian dualists, stuck in the 17th century.

Rene Descartes believed that the mind and body are two different things and that consciousness is not physical. “I think, therefore I am”, as he put it, which became one of the foundation slogans of philosophy.

Anthony Gottlieb wrote in The Dream of Enlightenment: “Descartes’ reasons for believing that his soul or self must be something non-material were as follows. I cannot doubt that I exist. But I can doubt that I have a body. Therefore, I am something separate from my body.”

Descartes went further, saying that it also means that God exists and “that every single moment of my entire existence depends on him”.

But as philosophy has moved on from Descartes and the early Enlightenment, and religious authorities are no longer hovering over the output of scientists and philosophers, the rise of materialism has challenged Descartes’ dualism. In recent times a more reductionist approach to consciousness has arisen.

……

Even if AI sentience doesn’t develop, AI is getting smarter, and big tech shouldn’t be allowed to keep going unobserved and unregulated.

More here:

https://thenewdaily.com.au/opinion/2022/06/20/alan-kohler-ai-needs-regulating/

I really believe too much control of the evolution of AI sits in the corporate sector, where the profit motive rather than the public good is the major driver, and the level of transparency about what is actually happening is far from perfect.

So many technologies get away from regulators and politicians for too long and we only see control re-emerge – if at all – way too late. I reckon AI is an area where we need to pro-actively grab control and make sure things move in directions that support the public interest rather than damaging it!

The last paragraph says it all, I believe!

David.
