I noted this last week.
New insulin pump flaws highlight security risks from medical devices
Attackers could exploit flaws in the Animas OneTouch Ping insulin pump system to deliver dangerous insulin doses
Medical device manufacturer Animas, a subsidiary of Johnson & Johnson, is warning diabetic patients who use its OneTouch Ping insulin pumps about security issues that could allow hackers to deliver unauthorized doses of insulin.
The vulnerabilities were discovered by Jay Radcliffe, a security researcher at Rapid7 who is a Type I diabetic and user of the pump. The flaws primarily stem from a lack of encryption in the communication between the device's two parts: the insulin pump itself and the meter-remote that monitors blood sugar levels and remotely tells the pump how much insulin to administer.
The pump and the meter use a proprietary wireless management protocol through radio frequency communications that are not encrypted. This exposes the system to several attacks.
First, passive attackers can snoop on the traffic and read the blood glucose results and insulin dosage data. Then, they can trivially spoof the meter to the pump because the key used to pair the two devices is transmitted in clear text.
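The missing control here is message authentication: if the meter and pump shared a secret key that was provisioned once and never broadcast, a spoofed meter could not forge valid commands. A minimal sketch of that idea (Python, using a hypothetical command format — this is not the actual OneTouch Ping protocol, which Rapid7 did not publish in full):

```python
import hashlib
import hmac
import os

# Hypothetical shared secret, provisioned at pairing time and
# never transmitted over the air (unlike the cleartext key the
# researchers observed).
SHARED_KEY = os.urandom(32)

def sign_command(key: bytes, command: bytes) -> bytes:
    # Append an HMAC-SHA256 tag that only a holder of the key can compute.
    tag = hmac.new(key, command, hashlib.sha256).digest()
    return command + tag

def verify_command(key: bytes, frame: bytes):
    # Split off the 32-byte tag and recompute it from the command body.
    command, tag = frame[:-32], frame[-32:]
    expected = hmac.new(key, command, hashlib.sha256).digest()
    # Constant-time comparison avoids leaking tag bytes via timing.
    return command if hmac.compare_digest(tag, expected) else None

# A legitimate meter can produce a frame the pump accepts...
frame = sign_command(SHARED_KEY, b"BOLUS 2.5")
assert verify_command(SHARED_KEY, frame) == b"BOLUS 2.5"

# ...but an attacker without the key cannot forge a valid tag.
forged = b"BOLUS 25.0" + os.urandom(32)
assert verify_command(SHARED_KEY, forged) is None
```

A tag check like this stops command forgery but not eavesdropping; confidentiality of the glucose and dosage data would additionally require encrypting the frames.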
"This vulnerability can be used to remotely dispense insulin and potentially cause the patient to have a hypoglycemic reaction," the Rapid7 researchers said in a blog post.
A third issue is that the pump lacks protection against so-called replay attacks, in which a legitimate command is intercepted and then played back by the attacker at a later time. This allows attackers to trigger an insulin bolus without any special knowledge, the researchers said.
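Note that even authenticated commands can be replayed verbatim unless each frame carries something fresh, such as a strictly increasing counter, that the receiver remembers and refuses to accept twice. A minimal sketch of that defence (hypothetical, not the actual pump firmware):

```python
class Pump:
    """Toy model of a receiver that rejects replayed command frames."""

    def __init__(self):
        # Highest counter value accepted so far; persisted in real devices.
        self.last_counter = -1

    def accept(self, counter: int, command: str) -> bool:
        # A replayed frame carries a counter the pump has already seen,
        # so the strictly-greater-than check rejects it.
        if counter <= self.last_counter:
            return False
        self.last_counter = counter
        return True

pump = Pump()
assert pump.accept(1, "BOLUS 2.5") is True   # legitimate command accepted
assert pump.accept(1, "BOLUS 2.5") is False  # verbatim replay rejected
assert pump.accept(2, "STATUS") is True      # next fresh command accepted
```

The counter (or a nonce) must itself be covered by the authentication tag, otherwise an attacker could simply renumber a captured frame.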
Lots more here:
There is also coverage on Reuters:
J&J warns diabetic patients: Insulin pump vulnerable to hacking
Johnson & Johnson is telling patients that it has learned of a security vulnerability in one of its insulin pumps that a hacker could exploit to overdose diabetic patients with insulin, though it describes the risk as low.
Medical device experts said they believe it was the first time a manufacturer had issued such a warning to patients about a cyber vulnerability, a hot topic in the industry following revelations last month about possible bugs in pacemakers and defibrillators.
J&J executives told Reuters they knew of no examples of attempted hacking attacks on the device, the J&J Animas OneTouch Ping insulin pump. The company is nonetheless warning customers and providing advice on how to fix the problem.
"The probability of unauthorized access to the OneTouch Ping system is extremely low," the company said in letters sent on Monday to doctors and about 114,000 patients who use the device in the United States and Canada.
"It would require technical expertise, sophisticated equipment and proximity to the pump, as the OneTouch Ping system is not connected to the internet or to any external network."
A copy of the text of the letter was made available to Reuters.
Insulin pumps are medical devices that patients attach to their bodies and that inject insulin through catheters.
The Animas OneTouch Ping, which was launched in 2008, is sold with a wireless remote control that patients can use to order the pump to dose insulin so that they do not need access to the device itself, which is typically worn under clothing and can be awkward to reach.
Jay Radcliffe, a diabetic and researcher with cyber security firm Rapid7 Inc, said he had identified ways for a hacker to spoof communications between the remote control and the OneTouch Ping insulin pump, potentially forcing it to deliver unauthorized insulin injections.
Lots more here:
Clearly this is a serious problem and not one to be ignored.
Interestingly there is a useful idea on reducing the risk from Australia.
Why health implants should have open source code
October 4, 2016 6.14am AEDT
Author
James H. Hamlyn-Harris
Senior Lecturer, Computer Science and Software Engineering, Swinburne University of Technology
As medical implants become more common, sophisticated and versatile, understanding the code that runs them is vital. A pacemaker or insulin-releasing implant can be lifesaving, but they are also vulnerable not just to malicious attacks, but also to faulty code.
For commercial reasons, companies have been reluctant to open up their code to researchers. But with lives at stake, we need to be allowed to take a peek under the hood.
Over the past few years several researchers have revealed lethal vulnerabilities in the code that runs some medical implants. The late Barnaby Jack, for example, showed that pacemakers could be “hacked” to deliver lethal electric shocks. Jay Radcliffe demonstrated a way of wirelessly making an implanted insulin pump deliver a lethal dose of insulin.
But “bugs” in the code are also an issue. Researcher Marie Moe recently discovered this first-hand, when her Implantable Cardioverter Defibrillator (ICD) unexpectedly went into “safe mode”. This caused her heart rate to drop by half, with drastic consequences.
It took months for Moe to figure out what went wrong with her implant, and this was made harder because the code running in the ICD was proprietary, or closed-source. The reason? Reverse-engineering closed-source code is a crime under various laws, including the US Digital Millennium Copyright Act 1998. It is a violation of copyright, theft of intellectual property, and may be an infringement of patent law.
Why researchers can’t just look at the code
Beyond legal restrictions, there’s another reason why researchers can’t simply inspect the source code the way you might take apart your lawnmower. It takes a very talented programmer using expensive software to reverse-engineer machine code into something readable, and even then the process is far from exact.
To understand why, it helps to know a bit about how companies create and ship software.
Software starts as a set of requirements – software must do this; it must look like that; it must have these buttons. Next, the software is designed – this component is responsible for these operations, it passes data to that component, and so on. Finally, a coder writes the instructions to tell the computer how to create the components and in detail how they work. These instructions are all the source code – human-readable instructions using English-like verbs (read, write, exit) mixed with a variety of symbols which the programmer and the computer both understand.
Up to this point, the source code is easily understood by a human. But this isn’t the end of the process. Before software is shipped it goes through one final transformation – it is converted to machine code. It now looks like just a lot of numbers. The source code is gone, replaced by the machine code. It’s now a bit like the inside of your car stereo; it “contains no serviceable parts”. Users are not supposed to mess with the machine code.
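The gap James describes can be seen in miniature with Python's built-in disassembler: the source line reads like English, while the compiled form is a list of opaque low-level instructions. Shipping only the latter is, roughly, what "closed source" means for an implant. (Python bytecode stands in here for machine code; real firmware is compiled further still.)

```python
import dis

def dose(units, rate):
    # Readable source: the intent is obvious to a human.
    return units * rate

# The compiled form: numbered instructions whose intent must be
# painstakingly reconstructed, which is what reverse engineering does.
print(dis.Bytecode(dose).dis())
```

Running this prints instructions such as LOAD_FAST and RETURN_VALUE in place of the one readable line, illustrating why recovering source from shipped code is slow and inexact.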
The full article is found here:
I suspect that, given what is described above, James has just added a very strong additional plank to his argument!
Has anyone seen a plan or comment from the Therapeutic Goods Administration on what it intends to do in this domain and how it plans to prevent problems – especially given that many of the devices mentioned here are regulated by it?
David.
David, a proper response to these types of situations will require three changes:
1. a shift from threat to vulnerability as the basis for understanding risk;
2. adoption by the firm (e.g. Johnson & Johnson) of a risk policy (not a risk management policy) that commits the organization to anticipating and avoiding catastrophic failures; and,
3. willing participation in a Risk Knowledge Exchange Network for healthcare anchored in the Duty of Care. Such a network would have purposefully constructed Rules of Engagement addressing Disclosure and Deployment of Information, including protection of intellectual property.
The reality of risk lies in the actual, post hoc cost of its impact or consequences. Imagine the impact if the device’s failure had cost the President of the United States his or her life.