Quote Of The Year

Timeless Quotes - Sadly The Late Paul Shetler - "It's not Your Health Record, it's a Government Record Of Your Health Information"

or

H. L. Mencken - "For every complex problem there is an answer that is clear, simple, and wrong."

Sunday, January 14, 2024

It Looks Like 2024 Is Really Going To Have Us Face Some Hard Questions In The AI Domain!

This popped up over the break: 

New laws to curb danger of high-risk artificial intelligence

By Lisa Visentin

January 14, 2024 — 5.00am

Legislation will govern the use of artificial intelligence in high-risk settings such as law enforcement, job recruitment and healthcare, with the Albanese government poised to legislate mandatory safety requirements in its first steps towards regulating the technology.

Industry and Science Minister Ed Husic said a new advisory body would work with government, industry and academic experts to devise the best legislative framework and define what constituted high-risk technologies and their applications across industries.

He said the starting point for assessing high risk would be “anything that affects the safety of people’s lives, or someone’s future prospects in work or with the law. Those are areas that are likely to be targeted by future changes.”

“These technologies are going to shape the way we do our jobs, the performance of the economy and the way we live our lives. So what we need to ensure is that AI works in the way it is intended, and that people have trust in the technology.”

The move to legislate safeguards for high-risk AI forms a key plank of the government’s interim response to a consultation process launched last year on the safe and responsible use of AI, to be unveiled by Husic this week. The government received more than 500 responses to its discussion paper, including from tech giants Google and Meta, major banks, supermarkets, legal bodies, and universities. It said almost all the submissions called for action on “preventing, mitigating and responding” to the harms of AI.

Many countries are grappling with how to respond to the emerging risks from AI without stifling innovation, as the rampant take-up of generative chatbots like ChatGPT and automated machine learning systems has highlighted the potential for the technology to revolutionise entire industries. Consulting firm McKinsey has calculated the technology could add between $1.1 trillion and $4 trillion to the Australian economy by the early 2030s.

But the breakneck speed of AI development has triggered concern over the potential ethical and moral harms posed by its use and has led to a furious debate in tech circles. Last year, more than 350 researchers and executives working in AI, including ChatGPT creator and OpenAI chief executive Sam Altman, signed an open letter warning that mitigating the “risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war”.

Husic said the decision to focus on high-risk technologies while leaving low-risk applications unregulated was driven by the view that the vast majority of AI use was safe and driving major advancements in areas like education and medicine.

The government’s approach, at this stage, does not include mirroring steps like those taken by the European Union which, through its proposed AI Act agreed to by policymakers in December, will seek to ban certain uses of AI that pose an “unacceptable risk”. These include the creation of social credit-scoring systems, biometric profiling, and scraping of images from the internet to create facial recognition databases.

But Husic said the government would keep an open mind to stricter regulatory responses and was watching developments in other countries.

“If other jurisdictions like the EU are doing things that we think will work locally, then we’re very open to it. If there is a potential impact on the safety of people’s lives or their future prospects then we will act,” he said.

The new AI advisory body will consider options for the types of legislative safeguards that should be created and whether these should be achieved through a single AI Act or by amending existing laws.

One issue it will likely consider is how best to safeguard against algorithmic bias arising from incomplete data sets that can discriminate against people based on characteristics such as race or sex, which the Industry Department’s discussion paper highlighted as a major concern. The paper referenced overseas cases of law enforcement agencies relying on AI models to predict recidivism that disproportionately targeted minority groups, and employers using recruitment algorithms that prioritised male over female candidates.

Here is the link:

https://www.smh.com.au/politics/federal/new-laws-to-curb-danger-of-high-risk-artificial-intelligence-20240111-p5ewnu.html
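
The algorithmic bias the article closes on is worth a moment, because it is quite measurable in practice. Below is a minimal, illustrative Python sketch of one common fairness check, the demographic parity difference, which simply compares how often a model selects people from each group. The data, the groups and the model decisions are all hypothetical, and this is a sketch of the general technique rather than of any system the article refers to.

# Minimal, illustrative fairness check: demographic parity difference.
# All data below is hypothetical; a real audit would use a model's actual outputs.

def demographic_parity_difference(predictions, groups):
    """Return the gap in positive-outcome rates across groups, plus the rates."""
    rates = {}
    for label in set(groups):
        outcomes = [p for p, g in zip(predictions, groups) if g == label]
        rates[label] = sum(outcomes) / len(outcomes)
    values = sorted(rates.values())
    return values[-1] - values[0], rates

# Hypothetical shortlisting decisions (1 = shortlisted) from a recruitment model.
predictions = [1, 0, 1, 1, 0, 1, 0, 0, 1, 0, 0, 0]
groups = ["m", "m", "m", "m", "m", "m", "f", "f", "f", "f", "f", "f"]

gap, rates = demographic_parity_difference(predictions, groups)
print("Selection rates by group:", rates)
print("Demographic parity difference: %.2f" % gap)  # a large gap warrants investigation

On the toy data above, one group is shortlisted 67 per cent of the time and the other 17 per cent, a gap of 0.50, which is exactly the sort of pattern the Industry Department's discussion paper flags as a concern.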

There was also an associated editorial! 

Government must meet challenge of identifying high-risk areas of AI

By The Herald's View

January 14, 2024 — 5.00am

The consensus of thinking around artificial intelligence (AI) is that 2024 is the year when we will really start to feel its impact.

The abilities of ChatGPT to write poetry in the style of William Wordsworth, for example, or to compose computer code are now familiar. The useful and commercial potential of AI when it comes to medicine and science is rapidly being investigated and exploited. Researchers are now exploring whether AI can be used to interpret the results of bowel cancer scans faster and more accurately than a human.

Much like the impact of the launch of the World Wide Web in 1993, AI comes with very obvious and immediate benefits as well as yet-to-be-realised and potentially detrimental ramifications.

Apposite, then, that the federal government will take steps to endeavour to control the impacts of AI applications after a consultation process it began last year, seeking views on whether Australia has the right governance in place to support the safe and responsible use and development of AI.

As our political correspondent Lisa Visentin writes today, more than 500 submissions were received in response to the government’s discussion paper, and almost all of them called for the government to act on “preventing, mitigating and responding” to the harms of AI.

As a result, the government will this week unveil its interim response, which will include mandatory safety guardrails for AI in high-risk settings. The government concedes: “When AI is used in high-risk contexts, harms can be difficult or impossible to reverse such that specific guardrails for AI design, development, deployment and use may be needed.”

That, it says, could mean amendments to existing laws or a dedicated legislative framework, to be developed with the help of a new expert advisory body.

The government recognises the upside of AI; economic modelling estimates that adopting AI and automation could add an extra $170 billion to $600 billion a year to Australia’s GDP by 2030.

In formulating a policy response, the federal government recognises the vast majority of AI uses are low risk and says legislation that targets high-risk areas allows the low risk to flourish unimpeded.

But therein lies the challenge. We want to enjoy the benefits of AI, but this brings with it the need and responsibility to identify high-risk areas that could have big impacts, particularly irreversible ones.

The European Union is further down the path of legislation, reaching a provisional agreement on the Artificial Intelligence Act to ensure AI in Europe “is safe, respects fundamental rights and democracy, while businesses can thrive and expand”.

The list of banned applications, exemptions, obligations for high-risk systems and proposed guardrails is a complex one.

The definition of high risk includes AI systems with significant potential harm to health, safety, fundamental rights, environment, democracy and systems used to influence the outcome of elections and voter behaviour.

The proposed EU legislation means citizens will have a right to launch complaints about AI systems and receive explanations about decisions based on high-risk systems that affect their rights.

Infrastructure systems (water, gas, electricity), systems determining access to educational institutions or recruitment, law enforcement, border control, administration of justice, biometric identification and emotion recognition are also areas deemed high risk.

Brando Benifei of Italy’s Partito Democratico said after the agreement: “Thanks to the European Parliament’s resilience, the world’s first horizontal legislation on artificial intelligence will keep the European promise – ensuring that rights and freedoms are at the centre of the development of this ground-breaking technology. Correct implementation will be key.”

The Australian government is to be applauded for its early recognition of the potential risks posed by AI. We believe it should now move forward with equal expediency in closely identifying and ring-fencing the high-risk areas. Sufficient financial resources must be given to ensure that it can achieve those goals without delay. Australian citizens should expect nothing less.

Here is the link:

https://www.smh.com.au/national/nsw/government-must-meet-challenge-of-identifying-high-risk-areas-of-ai-20240112-p5ewuc.html

I have the feeling this is going to be a transitional year as many services and entities figure out just what the role of AI will be, what the economic and human impact will be, and what the right way to proceed is.

In healthcare, clinical decision support (a form of AI) has been around for decades, and many of us are used to the way our various activities and processes are influenced by automated guidance.
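
For readers who have not met it, a great deal of traditional clinical decision support is simply explicit rules evaluated over patient data. Here is a minimal Python sketch of that style of automated guidance; the rules, thresholds, drug names and patient record are all hypothetical and purely for illustration.

# A toy rule-based clinical decision support check, in the style of alerts
# that have guided prescribing for decades. All rules and values below are
# hypothetical and for illustration only.

ALERT_RULES = [
    ("Renal dose review advised",
     lambda p: p["egfr_ml_min"] < 30 and "metformin" in p["medications"]),
    ("Possible interaction: warfarin + aspirin",
     lambda p: {"warfarin", "aspirin"} <= set(p["medications"])),
]

def run_cds_checks(patient):
    """Return the description of every alert rule the patient record triggers."""
    return [description for description, rule in ALERT_RULES if rule(patient)]

patient = {
    "egfr_ml_min": 24,                        # hypothetical renal function result
    "medications": ["metformin", "aspirin"],  # hypothetical current medications
}

for alert in run_cds_checks(patient):
    print("ALERT:", alert)  # prints "ALERT: Renal dose review advised"

Modern AI tools differ enormously in scale and opacity, but the basic idea of software nudging the clinical workflow is, as I say, decades old.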

In many other disciplines AI has yet to be explored as fully, but you can be sure work is underway apace!

Tools like ChatGPT have made a huge impact and greatly widened thinking about what is possible. They are also turning out to be useful testbeds.

I expect an ongoing and fascinating news flow all this year and on into the future. Let me know how I went with this one this time next year!

David.
