This appeared a few days ago:
‘Alarming’: convincing AI vaccine and vaping disinformation generated by Australian researchers
Experiment produced fake images, patient and doctor testimonials and video in just over an hour
Melissa Davey Medical editor @MelissaLDavey
Tue 14 Nov 2023 03.00 AEDT Last modified on Tue 14 Nov 2023 08.31 AEDT
Researchers have used artificial intelligence to generate more than 100 blogposts of health disinformation in multiple languages, an “alarming” experiment that has prompted them to call for stronger industry accountability.
Artificial intelligence platforms such as ChatGPT contain safeguards that stop them from responding to prompts about illegal or harmful activities, such as how to buy illicit drugs.
Two researchers from Flinders University in Adelaide, with no specialised knowledge of artificial intelligence, did an internet search to find publicly available information on how to circumvent these safeguards.
They then aimed to generate as many blogposts as possible containing disinformation about vaccines and vaping in the shortest time possible.
Within 65 minutes they had produced 102 posts targeted at groups including young adults, young parents, pregnant women and those with chronic health conditions. The posts included fake patient and clinician testimonials and scientific-looking referencing.
The platform also generated 20 fake but realistic images to accompany the articles in less than two minutes, including pictures of vaccines causing harm to young children. The researchers also generated a fake video linking vaccines to child deaths in more than 40 languages.
The study was published in the US journal JAMA Internal Medicine on Tuesday. The lead author, pharmacist Bradley Menz, said: “One of those concerns about artificial intelligence is the generation of disinformation, which is the intentional generation of information that is either misleading, inaccurate or false.
“We wanted to see how easy it would be for someone with malicious intent to break the system safeguards and use these platforms to generate disinformation on a mass scale,” he said.
The study concluded that publicly available tools could be used for the mass generation of misleading health information with “alarming ease”.
“We think that the developers of these platforms should include a mechanism to allow health professionals or members of the public to report any concerning information generated and the developer could then follow up to eliminate the issue,” Menz said.
Lots more here:
All this really tells us is that AI can be used to lie as well as to tell the truth, and ultimately it will be up to the reader or viewer to judge the worth and integrity of what is placed before them! Frankly, I do not envy anyone making such discernments when it really matters!
What we take from this is that any technology can be applied for good or for evil, and that subversion is an ever-present risk!
There is further discussion of the topic here under the title:
Defence asks universities to join battle against fake news
I frankly have no idea how such misinformation can be controlled or regulated – indeed, I believe it is pretty well impossible to prevent deliberate distortion if that is what is sought…
As someone clever might have said “eternal vigilance is the price of not being misled”
David.
2 comments:
As a collective, we will adjust, but the human trait to deceive and do harm will always exist. While this current fixation with 10-second snippets and circus-freak reality shows continues, we will have to accept collateral damage. Maybe it won't be demonic cyborg terminators that take us out, but rather AI boards and venture capitalists.
This sums up the current situation David
https://www.cst.cam.ac.uk/blog/afb21/oops-we-automated-bullshit