Quote Of The Year

Timeless Quotes - Sadly The Late Paul Shetler - "It's not Your Health Record, it's a Government Record Of Your Health Information"

or

H. L. Mencken - "For every complex problem there is an answer that is clear, simple, and wrong."

Friday, October 18, 2024

I Guess This Is A Warning We Need To Take Seriously – Given Where It Comes From!

This appeared last week – Nobel Prize week

Physics Nobel winner Geoffrey Hinton issues AI alert

Tom Whipple

9 October 2024

Sometimes on the morning when scientists receive the Nobel prize, they thank their colleagues. Occasionally they talk about the surprise of receiving a call from a Swedish number. Often they are simply overwhelmed. Geoffrey Hinton had other concerns.

Less than an hour after receiving the prize in physics, the British-Canadian AI pioneer warned that the technology he helped create could lead to the subjugation of humanity.

Professor Hinton, who was born in London and now works at the University of Toronto, shared the prize with John Hopfield, from Princeton. Together, 40 years ago, they applied techniques from physics to show how machines could in a sense learn for themselves.

In doing so, they provided tools that helped in the development of AI systems. They also created technology that, Professor Hinton said, could be used for tremendous good but had significant dangers.

Machine learning works differently from conventional programming. Rather than relying on programs in which computers are given explicit instructions, it enables them to learn from examples in an analogous manner to humans.

Professor Hinton was honoured for his work on reapplying the equations of Ludwig Boltzmann, a 19th-century physicist. Boltzmann had been looking at a way to understand systems that involved lots of individual elements, such as molecules of gas.

Professor Hinton’s insight was to apply the principles to spotting patterns in data. He separately worked on the “back propagation” algorithms that power today’s AI systems.

Although the work was done in the early 1980s, it came to fruition only in the past 15 years, when the growth in data gave computers enough examples to work from and the growth in computing power gave them the capability. It is key to systems such as ChatGPT.

Yet the power of the technology also led Professor Hinton, 76, to quit his work with Google to warn humanity of its dangers.

Speaking to the Nobel committee, he said humanity was only just starting to understand its power.

“I think it will have a huge influence. It will be comparable with the Industrial Revolution, but instead of exceeding people in physical strength, it will exceed people in intellectual ability. We have no experience of what it is like to have things smarter than us,” he said.

“It’s going to be wonderful in many respects. In healthcare, it has given us much better healthcare. In almost all industries it is going to make them more efficient.

“People are going to be able to do the same amount of work with an AI assistant in much less time. It’ll mean huge improvements in productivity.”

He added: “We also have to worry about a number of possible bad consequences, particularly the threat of these things getting out of control.”

Asked whether he had regrets, he said: “In the same circumstance I would do the same again, but I do worry the overall consequence of this might be systems more intelligent than us that eventually take control.”

Computer scientist Neil Lawrence of Cambridge University, where Professor Hinton studied, said: “For me, Hinton’s been a total inspiration. (He) is great because of his ability to inspire multiple fields to come together.”

He said, however, that he did not believe AI was a threat in the way Professor Hinton believed.

“I share some of Geoff’s concerns about the societal aspects of these technologies but disagree on the origin of the problems and the form of the solutions,” he said.

The Times

Here is the link:

https://www.theaustralian.com.au/world/the-times/physics-nobel-winner-brian-hinton-issues-aialert/news-story/e883f04be4db8c6e3418609f0e9b20a5

Well, who am I to quibble? We need to stay alert and make sure, as best we can, that innovations are used overall for good!

David.
