This blog is totally independent and unpaid, and has just three major objectives.
The first is to inform readers of news and happenings in the e-Health domain, both here in Australia and world-wide.
The second is to provide commentary on e-Health in Australia and to foster improvement where I can.
The third is to encourage discussion of the matters raised in the blog, so that readers can, I hope, come away with a balanced view of what is really happening and what successes are being achieved.
Quote Of The Year
Paul Shetler - "It's not Your Health Record, it's a Government Record Of Your Health Information"
H. L. Mencken - "For every complex problem there is an answer that is clear, simple, and wrong."
Friday, June 08, 2018
AI Continues To Break Out All Over And So Do Its Implications.
All sorts of articles on AI worth noting have appeared this week.
First we have Henry Kissinger…
Jun 1 2018 at 9:00 AM
Human society isn't ready for artificial intelligence
Is it possible that human history might go the way of the Incas, faced with a Spanish culture incomprehensible and even awe-inspiring to them? Were we at the edge of a new phase of human history?
by Henry Kissinger
Three years ago, at a conference on trans-Atlantic issues, the subject of artificial intelligence appeared on the agenda. I was on the verge of skipping that session – it lay outside my usual concerns – but the beginning of the presentation held me in my seat.
The speaker described the workings of a computer program that would soon challenge international champions in the game Go. I was amazed that a computer could master Go, which is more complex than chess. In it, each player deploys 180 or 181 pieces (depending on which colour he or she chooses), placed alternately on an initially empty board; victory goes to the side that, by making better strategic decisions, immobilises his or her opponent by more effectively controlling territory.
The speaker insisted that this ability could not be preprogrammed. His machine, he said, learned to master Go by training itself through practice. Given Go's basic rules, the computer played innumerable games against itself, learning from its mistakes and refining its algorithms accordingly. In the process, it exceeded the skills of its human mentors. And indeed, in the months following the speech, an AI program named AlphaGo would decisively defeat the world's greatest Go players.
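The self-play loop described above can be sketched in miniature. The following is a toy sketch, assuming tabular value learning on a simple game of Nim rather than anything like AlphaGo's actual method (which combined deep neural networks with Monte Carlo tree search): the program starts knowing only the rules and improves purely by playing itself and learning from its mistakes.

```python
import random

# Toy self-play learner for Nim: a pile of stones, each player takes 1 or 2,
# and whoever takes the last stone wins. The program knows only the rules;
# it improves by playing itself and updating its value estimates after
# every move -- "learning from its mistakes" in miniature.

random.seed(0)
N = 10                               # starting pile size
V = {s: 0.0 for s in range(N + 1)}   # learned value of facing a pile of s

def best_move(s, eps):
    """Epsilon-greedy: usually leave the opponent the worst position."""
    moves = [m for m in (1, 2) if m <= s]
    if random.random() < eps:
        return random.choice(moves)
    return min(moves, key=lambda m: V[s - m])

def self_play_episode(eps=0.2, alpha=0.1):
    s = N
    while s > 0:
        m = best_move(s, eps)
        if s - m == 0:
            V[s] += alpha * (1.0 - V[s])        # taking the last stone wins
        else:
            V[s] += alpha * (-V[s - m] - V[s])  # my value = minus opponent's
        s -= m

for _ in range(5000):
    self_play_episode()

# Game theory says piles that are multiples of 3 are lost for the player
# to move; after training, the learned values single those piles out.
losing = sorted(s for s in range(1, N + 1) if V[s] < 0)
print(losing)
```

After a few thousand self-play games the program has, with no human examples, rediscovered the losing positions of the game, which is the essence of the self-training Kissinger describes.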
As I listened to the speaker celebrate this technical progress, my experience as a historian and occasional practising statesman gave me pause. What would be the impact on history of self-learning machines – machines that acquired knowledge by processes particular to themselves, and applied that knowledge to ends for which there may be no category of human understanding? Would these machines learn to communicate with one another? How would choices be made among emerging options? Was it possible that human history might go the way of the Incas, faced with a Spanish culture incomprehensible and even awe-inspiring to them? Were we at the edge of a new phase of human history?
Aware of my lack of technical competence in this field, I organised a number of informal dialogues on the subject, with the advice and co-operation of acquaintances in technology and the humanities. These discussions have caused my concerns to grow.
Age of Reason
Heretofore, the technological advance that most altered the course of modern history was the invention of the printing press in the 15th century, which allowed the search for empirical knowledge to supplant liturgical doctrine, and the Age of Reason to gradually supersede the Age of Religion. Individual insight and scientific knowledge replaced faith as the principal criterion of human consciousness. Information was stored and systematised in expanding libraries. The Age of Reason originated the thoughts and actions that shaped the contemporary world order.
But that order is now in upheaval amid a new, even more sweeping technological revolution whose consequences we have failed to fully reckon with, and whose culmination may be a world relying on machines powered by data and algorithms and ungoverned by ethical or philosophical norms.
The internet age in which we already live prefigures some of the questions and issues that AI will only make more acute. The Enlightenment sought to submit traditional verities to a liberated, analytic human reason. The internet's purpose is to ratify knowledge through the accumulation and manipulation of ever expanding data. Human cognition loses its personal character. Individuals turn into data, and data becomes regnant.
Users of the internet emphasise retrieving and manipulating information over contextualising or conceptualising its meaning. They rarely interrogate history or philosophy; as a rule, they demand information relevant to their immediate practical needs. In the process, search-engine algorithms acquire the capacity to predict the preferences of individual clients, enabling the algorithms to personalise results and make them available to other parties for political or commercial purposes. Truth becomes relative. Information threatens to overwhelm wisdom.
Inundated via social media with the opinions of multitudes, users are diverted from introspection; in truth many technophiles use the internet to avoid the solitude they dread. All of these pressures weaken the fortitude required to develop and sustain convictions that can be implemented only by travelling a lonely road, which is the essence of creativity.
The impact of internet technology on politics is particularly pronounced. The ability to target micro-groups has broken up the previous consensus on priorities by permitting a focus on specialised purposes or grievances. Political leaders, overwhelmed by niche pressures, are deprived of time to think or reflect on context, contracting the space available for them to develop vision.
The digital world's emphasis on speed inhibits reflection; its incentive empowers the radical over the thoughtful; its values are shaped by subgroup consensus, not by introspection. For all its achievements, it runs the risk of turning on itself as its impositions overwhelm its conveniences.
Science fiction has imagined scenarios of AI turning on its creators. More likely is the danger that AI will misinterpret human instructions due to its inherent lack of context.
Elon Musk can’t be wrong about artificial intelligence, can he? Well, actually, yes he is wrong.
The idea behind the existential threat of AI is alternatively called the “singularity” or the “intelligence explosion” and it’s gaining some serious momentum. There are more transistors today than there are leaves on all the trees in the world, more than four billion alone in the smartphone you carry. Is digital eclipsing the “real” world?
A little history about this idea of our impending extinction at the hands of our silicon creations. Princeton’s flamboyant and brilliant John von Neumann coined the term “singularity” in terms of AI in the 1950s. The term “intelligence explosion” might have come from mathematician I.J. Good in 1965. In Advances in Computers, vol. 6, Good wrote of an ultra-intelligent machine that could design even better machines, creating an “intelligence explosion” that would leave the intelligence of man far behind. He warned, “the first ultra-intelligent machine is the last invention that man need ever make”.
Ray Kurzweil, a futurist and inventor who pops an endless regimen of vitamins so he will stay alive long enough to upload himself into the cloud and is also director of engineering at Google, says human-level machine intelligence might be just over 10 years away. He predicts 2029 for the year an AI passes the Turing test and marked 2045 as the date for the singularity. Computer scientist Jurgen Schmidhuber, whom some call the “father of artificial intelligence”, thinks that in 50 years there will be a computer as smart as all humans combined. And Musk, technology ringmaster extraordinaire of PayPal, SpaceX, Tesla and, recently, OpenAI, while not quite as prone to specific-date-prophesying, calls artificial intelligence a “fundamental existential risk for human civilisation”.
Kaiser Permanente is teaming with machine learning vendor Medial EarlySign on an initiative to use artificial intelligence to identify which Kaiser patients are most likely to respond to care interventions.
Kaiser has previously worked with the vendor in a project to identify patients with the highest risk of having undiagnosed colorectal cancer while in the early stages of the disease.
Now, Kaiser will use Medial EarlySign’s AlgoAnalyzer platform to stratify populations. The platform supports multiple types of algorithms that help clinicians, care managers and population health managers predict risk for a population or an individual.
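To make "stratifying a population" concrete, here is a toy sketch: a simple weighted model scores each patient, and the scores split the population into tiers so care managers can prioritise outreach. The features, weights and thresholds are invented for illustration only; they are not Medial EarlySign's actual algorithms.

```python
import math

# Hypothetical population risk stratification: score each patient with a
# simple logistic model, then bucket the scores into high / medium / low
# risk tiers. All features, weights and cut-offs here are made up.

def risk_score(patient):
    """Logistic score in (0, 1) from a few hypothetical risk factors."""
    z = (0.04 * patient["age"]
         + 0.8 * patient["chronic_conditions"]
         + 1.2 * patient["prior_admissions"]
         - 6.0)                      # intercept spreads scores over (0, 1)
    return 1.0 / (1.0 + math.exp(-z))

def stratify(patients, high=0.7, medium=0.4):
    """Bucket each patient into a high / medium / low risk tier."""
    tiers = {"high": [], "medium": [], "low": []}
    for p in patients:
        s = risk_score(p)
        tier = "high" if s >= high else "medium" if s >= medium else "low"
        tiers[tier].append(p["id"])
    return tiers

patients = [
    {"id": "A", "age": 82, "chronic_conditions": 3, "prior_admissions": 2},
    {"id": "B", "age": 45, "chronic_conditions": 1, "prior_admissions": 0},
    {"id": "C", "age": 75, "chronic_conditions": 2, "prior_admissions": 1},
]
tiers = stratify(patients)
print(tiers)  # A high, C medium, B low
```

The point of the tiers, rather than raw scores, is operational: a care manager can act on "contact the high tier first" far more easily than on a list of probabilities.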
Dermatologists are no match for artificial intelligence when it comes to diagnosing skin cancer, according to a new study by researchers in the United States, France and Germany.
The international team trained a convolutional neural network (CNN) to identify skin cancer by showing it more than 100,000 images of malignant melanomas as well as benign moles.
Specifically, they trained and validated Google’s Inception v4 CNN architecture using dermoscopic images at a 10-fold magnification and corresponding diagnoses. Then, they compared its performance with that of 58 international dermatologists from 17 countries—including 30 experts with more than five years of experience.
Results of the study, published this week in the Annals of Oncology, show that the CNN missed fewer melanomas and misdiagnosed benign moles as malignant less often than the group of experienced dermatologists.
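The two quantities behind that comparison are sensitivity (how many melanomas are caught) and specificity (how many benign moles are correctly left alone). A minimal sketch, with made-up labels and predictions, shows how they are computed:

```python
# "Missed fewer melanomas" means higher sensitivity; "misdiagnosed benign
# moles as malignant less often" means higher specificity. The labels and
# predictions below are invented purely to show the arithmetic.

def sensitivity_specificity(y_true, y_pred):
    """y_true / y_pred use 1 for melanoma, 0 for a benign mole."""
    tp = sum(t == 1 and p == 1 for t, p in zip(y_true, y_pred))
    fn = sum(t == 1 and p == 0 for t, p in zip(y_true, y_pred))
    tn = sum(t == 0 and p == 0 for t, p in zip(y_true, y_pred))
    fp = sum(t == 0 and p == 1 for t, p in zip(y_true, y_pred))
    return tp / (tp + fn), tn / (tn + fp)

y_true = [1, 1, 1, 1, 0, 0, 0, 0, 0, 0]   # 4 melanomas, 6 benign moles
y_pred = [1, 1, 1, 0, 0, 0, 0, 0, 1, 0]   # one missed, one false alarm
sens, spec = sensitivity_specificity(y_true, y_pred)
print(sens, spec)  # 0.75 and about 0.83
```

The study's claim is that the CNN scored better than the dermatologist group on both measures at once, which is the hard part: it is trivial to improve one at the expense of the other by simply calling more (or fewer) lesions malignant.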