Friday, June 29, 2018

Professional Medical Organisations Are Starting To Think About AI And Its Implications!

This appeared last week.

AMA issues policy statement on use of ‘augmented intelligence’

Published June 18 2018, 5:20pm EDT
The American Medical Association is weighing in on the potential of what it is calling “augmented intelligence” in fields such as radiology and the practice of medicine by clinicians.
The nation’s largest professional association for medical professionals issued its first policy addressing augmented intelligence at its annual meeting last week, adopting broad recommendations for health and technology stakeholders.
The AMA action cites both the potential of innovation and concerns about the role it will play in the practice of medicine and how it will affect patients in the near future.
“As technology continues to advance and evolve, we have a unique opportunity to ensure that augmented intelligence is used to benefit patients, physicians and the broad healthcare community,” says Jesse Ehrenfeld, MD, one of the AMA’s board members.
The technology can assist radiologists, physicians and other medical professionals in making decisions that result in better and more appropriate use of medical care, Ehrenfeld adds.
 “Combining AI methods and systems with an irreplaceable human clinician can advance the delivery of care in a way that outperforms what either can do alone,” he says. “But we must address challenges in the design, evaluation and implementation, as this technology is increasingly integrated into physicians’ delivery of care to patients.”
AMA’s policy says the group will:
• Leverage its ongoing engagement in digital health and other priority areas for improving patient outcomes and physicians’ professional satisfaction to help set priorities for the use of AI in healthcare.
• Identify opportunities to integrate the perspective of practicing physicians into the development, design, validation and implementation of AI in healthcare.
• Promote development of thoughtfully designed, high-quality, clinically validated healthcare AI that conforms to user-centered design practices, is transparent, conforms to leading standards to engender reproducibility, identifies and avoids bias, avoids introducing healthcare disparities, and safeguards individuals’ privacy and data security.
• Encourage education for patients, physicians, medical students and other professionals to inform others of the promise and limitations of healthcare AI.
• Explore the legal implications of healthcare AI, such as issues of intellectual property, and advocate for “appropriate professional and governmental oversight.”
More here:
We also have this from the UK.

UK report warns DeepMind Health could gain ‘excessive monopoly power’

Natasha Lomas@riptari / Jun 16, 2018
DeepMind’s foray into digital health services continues to raise concerns. The latest worries are voiced by a panel of external reviewers appointed by the Google-owned AI company to report on its operations after its initial data-sharing arrangements with the U.K.’s National Health Service (NHS) ran into a major public controversy in 2016.
The DeepMind Health Independent Reviewers’ 2018 report flags a series of risks and concerns, as they see it, including the potential for DeepMind Health to be able to “exert excessive monopoly power” as a result of the data access and streaming infrastructure that’s bundled with provision of the Streams app — and which, contractually, positions DeepMind as the access-controlling intermediary between the structured health data and any other third parties that might, in the future, want to offer their own digital assistance solutions to the Trust.
While the underlying FHIR (Fast Healthcare Interoperability Resources) API deployed by DeepMind for Streams is open, the contract between the company and the Royal Free Trust funnels connections via DeepMind’s own servers, and prohibits connections to other FHIR servers. A commercial structure that seemingly works against the openness and interoperability DeepMind’s co-founder Mustafa Suleyman has claimed to support.
“There are many examples in the IT arena where companies lock their customers into systems that are difficult to change or replace. Such arrangements are not in the interests of the public. And we do not want to see DeepMind Health putting itself in a position where clients, such as hospitals, find themselves forced to stay with DeepMind Health even if it is no longer financially or clinically sensible to do so; we want DeepMind Health to compete on quality and price, not by entrenching legacy position,” the reviewers write.
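To make the interoperability point concrete: a genuinely open FHIR deployment means any conformant client can address any conformant server through the same RESTful search interface. The sketch below shows how such a search URL is constructed; the base address and search parameters are hypothetical examples, not real Streams or Royal Free endpoints.

```python
# Minimal sketch of FHIR's RESTful search interface.
# The base URL below is an assumption for illustration only;
# DeepMind's actual Streams endpoints are not public.

def fhir_search_url(base_url: str, resource: str, **params: str) -> str:
    """Build a FHIR REST search URL, e.g. GET [base]/Observation?patient=123."""
    # Sort parameters so the output is deterministic.
    query = "&".join(f"{k}={v}" for k, v in sorted(params.items()))
    url = f"{base_url.rstrip('/')}/{resource}"
    return f"{url}?{query}" if query else url

# Any FHIR-conformant server exposes the same shape of endpoint --
# which is exactly the interoperability the reviewers argue the
# contractual routing through DeepMind's servers undercuts.
print(fhir_search_url("https://fhir.example.org/baseR4", "Observation",
                      patient="123"))
```

Because the URL shape is defined by the FHIR standard rather than by any vendor, swapping `base_url` to a different conformant server requires no client changes; it is the contract, not the technology, that prevents this here.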
Lots more here:
As well, awareness of other potential complications is rising:

Health execs not ready for societal, liability issues from AI

Published June 19 2018, 7:43am EDT
The vast majority of healthcare organizations lack the capabilities needed to ensure that their artificial intelligence systems act accurately, responsibly and transparently, finds a new survey by consulting and professional services firm Accenture.
AI has the potential to be a transformative technology in healthcare. In the Accenture survey, 80 percent of health executives agree that within the next two years, AI will work next to humans in their organization, as a coworker, collaborator and trusted advisor.
However, 81 percent of health executives say their organizations are not prepared to face the societal and liability issues that will require them to explain their AI-based actions and decisions, should issues arise, according to Accenture’s Digital Health Technology Vision 2018 report.
Lots more here:
All this is getting more and more interesting!
David.

1 comment:

  1. David, your headline is a bit misleading. It would have been better phrased as "Overseas Professional Medical Organisations Are Starting To Think About AI And Its Implications!"

    Australia thinks the future of health care is a patient controlled document management system.
