Friday, June 08, 2018

AI Continues To Break Out All Over And So Do Its Implications.

All sorts of articles on AI have appeared this week that are worth noting.
First we have Henry Kissinger…
  • Jun 1 2018 at 9:00 AM

Human society isn't ready for artificial intelligence

Is it possible that human history might go the way of the Incas, faced with a Spanish culture incomprehensible and even awe-inspiring to them? Were we at the edge of a new phase of human history?  
by Henry Kissinger
Three years ago, at a conference on trans-Atlantic issues, the subject of artificial intelligence appeared on the agenda. I was on the verge of skipping that session – it lay outside my usual concerns – but the beginning of the presentation held me in my seat.
The speaker described the workings of a computer program that would soon challenge international champions in the game Go. I was amazed that a computer could master Go, which is more complex than chess. In it, each player deploys 180 or 181 pieces (depending on which colour he or she chooses), placed alternately on an initially empty board; victory goes to the side that, by making better strategic decisions, immobilises his or her opponent by more effectively controlling territory.
The speaker insisted that this ability could not be preprogrammed. His machine, he said, learned to master Go by training itself through practice. Given Go's basic rules, the computer played innumerable games against itself, learning from its mistakes and refining its algorithms accordingly. In the process, it exceeded the skills of its human mentors. And indeed, in the months following the speech, an AI program named AlphaGo would decisively defeat the world's greatest Go players.
As I listened to the speaker celebrate this technical progress, my experience as a historian and occasional practising statesman gave me pause. What would be the impact on history of self-learning machines – machines that acquired knowledge by processes particular to themselves, and applied that knowledge to ends for which there may be no category of human understanding? Would these machines learn to communicate with one another? How would choices be made among emerging options? Was it possible that human history might go the way of the Incas, faced with a Spanish culture incomprehensible and even awe-inspiring to them? Were we at the edge of a new phase of human history?
Aware of my lack of technical competence in this field, I organised a number of informal dialogues on the subject, with the advice and co-operation of acquaintances in technology and the humanities. These discussions have caused my concerns to grow.

Age of Reason

Heretofore, the technological advance that most altered the course of modern history was the invention of the printing press in the 15th century, which allowed the search for empirical knowledge to supplant liturgical doctrine, and the Age of Reason to gradually supersede the Age of Religion. Individual insight and scientific knowledge replaced faith as the principal criterion of human consciousness. Information was stored and systematised in expanding libraries. The Age of Reason originated the thoughts and actions that shaped the contemporary world order.
But that order is now in upheaval amid a new, even more sweeping technological revolution whose consequences we have failed to fully reckon with, and whose culmination may be a world relying on machines powered by data and algorithms and ungoverned by ethical or philosophical norms.
The internet age in which we already live prefigures some of the questions and issues that AI will only make more acute. The Enlightenment sought to submit traditional verities to a liberated, analytic human reason. The internet's purpose is to ratify knowledge through the accumulation and manipulation of ever expanding data. Human cognition loses its personal character. Individuals turn into data, and data becomes regnant.
Users of the internet emphasise retrieving and manipulating information over contextualising or conceptualising its meaning. They rarely interrogate history or philosophy; as a rule, they demand information relevant to their immediate practical needs. In the process, search-engine algorithms acquire the capacity to predict the preferences of individual clients, enabling the algorithms to personalise results and make them available to other parties for political or commercial purposes. Truth becomes relative. Information threatens to overwhelm wisdom.
Inundated via social media with the opinions of multitudes, users are diverted from introspection; in truth many technophiles use the internet to avoid the solitude they dread. All of these pressures weaken the fortitude required to develop and sustain convictions that can be implemented only by travelling a lonely road, which is the essence of creativity.
The impact of internet technology on politics is particularly pronounced. The ability to target micro-groups has broken up the previous consensus on priorities by permitting a focus on specialised purposes or grievances. Political leaders, overwhelmed by niche pressures, are deprived of time to think or reflect on context, contracting the space available for them to develop vision.
The digital world's emphasis on speed inhibits reflection; its incentive empowers the radical over the thoughtful; its values are shaped by subgroup consensus, not by introspection. For all its achievements, it runs the risk of turning on itself as its impositions overwhelm its conveniences.
Science fiction has imagined scenarios of AI turning on its creators. More likely is the danger that AI will misinterpret human instructions due to its inherent lack of context.
Vastly more here:
Additionally we have this interesting view:

Rule by AI? There’s too much legacy stupidity

  • Allan Waddell
  • The Australian
  • 12:00AM May 29, 2018
Elon Musk can’t be wrong about artificial intelligence, can he? Well, actually, yes he is wrong.
The idea behind the existential threat of AI is alternatively called the “singularity” or the “intelligence explosion” and it’s gaining some serious momentum. There are more transistors today than there are leaves on all the trees in the world, more than four billion alone in the smartphone you carry. Is digital eclipsing the “real” world?
A little history about this idea of our impending extinction at the hands of our silicon creations. Princeton’s flamboyant and brilliant John von Neumann coined the term “singularity” in terms of AI in the 1950s. The term “intelligence explosion” might have come from mathematician I.J. Good in 1965. In Advances in Computers, vol. 6, Good wrote of an ultra-intelligent machine that could design even better machines, creating an “intelligence explosion” that would leave the intelligence of man far behind. He warned, “the first ultra-intelligent machine is the last invention that man need ever make”.
Ray Kurzweil, a futurist and inventor who pops an endless regime of vitamins so he will stay alive long enough to upload himself into the cloud and is also director of engineering at Google, says human-level machine intelligence might be just over 10 years away. He predicts 2029 for the year an AI passes the Turing test and marked 2045 as the date for the singularity. Computer scientist Jurgen Schmidhuber, whom some call the “father of artificial intelligence”, thinks that in 50 years there will be a computer as smart as all humans combined. And Musk, technology ringmaster extraordinaire of PayPal, SpaceX, Tesla and, recently, OpenAI, while not quite as prone to specific-date-prophesying, calls artificial intelligence a “fundamental existential risk for human civilisation”.
Much more here:
And for some good news we had this:

Kaiser to advance use of AI to personalize care interventions

Published May 31 2018, 4:55pm EDT
Kaiser Permanente is teaming with machine learning vendor Medial EarlySign on an initiative to use artificial intelligence to identify which Kaiser patients are most likely to respond to care interventions.
Kaiser has previously worked with the vendor in a project to identify patients with the highest risk of having undiagnosed colorectal cancer while in the early stages of the disease.
Now, Kaiser will use Medial EarlySign’s AlgoAnalyzer platform to stratify populations. It will enable the use of multiple sets of different types of algorithms that can help clinicians, care managers and population health managers predict risk in a population or an individual.
More here:
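For readers wondering what "stratifying a population" by predicted risk actually involves, here is a toy sketch of the general idea: score each patient with a predictive model, then bucket the population into tiers for care managers. This is purely illustrative, assuming a simple logistic model with made-up weights and features; it is not Medial EarlySign's AlgoAnalyzer or any real clinical algorithm.

```python
# Toy illustration of population risk stratification -- NOT the vendor's
# actual platform, just the general pattern: score, then bucket into tiers.
import numpy as np

def risk_scores(features, weights, bias):
    """Logistic model: map each patient's feature row to a 0-1 risk score."""
    return 1.0 / (1.0 + np.exp(-(features @ weights + bias)))

def stratify(scores, low=0.33, high=0.66):
    """Bucket scores into 'low' / 'medium' / 'high' risk tiers."""
    return np.where(scores >= high, "high",
           np.where(scores >= low, "medium", "low"))

# Hypothetical cohort: [age (scaled), prior admissions, abnormal lab flag]
cohort = np.array([[0.2, 0.0, 0.0],
                   [0.6, 1.0, 1.0],
                   [0.9, 4.0, 1.0]])
weights = np.array([1.5, 0.8, 1.2])   # illustrative only, not fitted
scores = risk_scores(cohort, weights, bias=-3.0)
print(stratify(scores))   # one tier per patient: low, medium, high
```

In a real deployment the weights would come from a model trained on historical outcomes, and the tier cut-offs would be chosen to match the capacity of the care-management team.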
and this:

AI outperforms dermatologists in diagnosing skin cancer

Published May 30 2018, 7:22am EDT
Dermatologists are no match for artificial intelligence when it comes to diagnosing skin cancer, according to a new study by researchers in the United States, France and Germany.
The international team trained a convolutional neural network (CNN) to identify skin cancer by showing it more than 100,000 images of malignant melanomas as well as benign moles.
Specifically, they trained and validated Google’s Inception v4 CNN architecture using dermoscopic images at a 10-fold magnification and corresponding diagnoses. Then, they compared its performance with that of 58 international dermatologists from 17 countries—including 30 experts with more than five years of experience.
Results of the study, published this week in the Annals of Oncology, show that the CNN missed fewer melanomas and misdiagnosed benign moles as malignant less often than the group of experienced dermatologists.
More here:
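The study's two headline findings map onto two standard metrics: "missed fewer melanomas" is higher sensitivity, and "misdiagnosed benign moles as malignant less often" is higher specificity. A minimal sketch of computing both from labels and predictions follows; the example predictions are invented for illustration and are not the study's data.

```python
# Sensitivity = fraction of true melanomas the classifier catches.
# Specificity = fraction of benign moles it correctly leaves alone.
def sensitivity_specificity(y_true, y_pred):
    """y_true / y_pred: 1 = malignant melanoma, 0 = benign mole."""
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)
    tn = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 0)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)
    return tp / (tp + fn), tn / (tn + fp)

labels     = [1, 1, 1, 1, 0, 0, 0, 0]   # ground-truth biopsy results (invented)
cnn_preds  = [1, 1, 1, 1, 0, 0, 0, 1]   # misses no melanoma, one false alarm
derm_preds = [1, 1, 1, 0, 0, 0, 1, 1]   # misses one melanoma, two false alarms

print(sensitivity_specificity(labels, cnn_preds))   # (1.0, 0.75)
print(sensitivity_specificity(labels, derm_preds))  # (0.75, 0.5)
```

A classifier that beats clinicians on both metrics at once, as the CNN reportedly did, is the interesting result; improving one at the expense of the other is just moving the decision threshold.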
So much going on, so little time to catch up with it all!
David.

4 comments:

  1. Artificial Intelligence isn't the problem, it's normal human intelligence.

    'White-hot anger': St Vincent's Hospital email gaff reveals confidential employee details

    https://www.smh.com.au/national/nsw/white-hot-anger-st-vincent-s-hospital-email-gaff-reveals-confidential-employee-details-20180608-p4zkcs.html

    And ADHA thinks that myhr is secure and protected...

    Trouble is, Tim doesn't understand enough to be worried.

    ReplyDelete
  2. Great articles David. We do indeed live in interesting times, and times that probably need a little more thought before opening certain doors. It is not the technology that worries me, it is what people do with it. Black powder can be both beautiful and devastating as a technology

    ReplyDelete
  3. The Black Mirror series is a great watch on some up and coming/future technologies and how they could be regulated or misused.

    ReplyDelete
  4. Thanks Peter. It reminded me of an article a few years back - https://ai.google/research/pubs/pub43146


    Abstract

    Machine learning offers a fantastically powerful toolkit for building complex systems quickly. This paper argues that it is dangerous to think of these quick wins as coming for free. Using the framework of technical debt, we note that it is remarkably easy to incur massive ongoing maintenance costs at the system level when applying machine learning. The goal of this paper is to highlight several machine learning specific risk factors and design patterns to be avoided or refactored where possible. These include boundary erosion, entanglement, hidden feedback loops, undeclared consumers, data dependencies, changes in the external world, and a variety of system-level anti-patterns.

    ReplyDelete