Wednesday, February 07, 2024

We Seem To Be Under Attack From AI From All Sides.

This appeared last week:

Special AI laws needed for financial services: Longo

Tom Burton and James Eyers

Jan 31, 2024 – 6.07pm

ASIC chairman Joe Longo says he is concerned about the high risks associated with AI and financial services, and he is unconvinced current rules will be enough to prevent unfair practices.

Risks from data poisoning, model manipulation and generative AI making up answers (so-called hallucinations) were warnings for firms not to become over-reliant on AI models that can’t be understood, examined or explained, Mr Longo warned.

To promote innovation, the federal government has embraced a light-touch voluntary approach to AI oversight, but with special rules to be developed for higher-risk environments such as healthcare and financial services.

Mr Longo endorsed the recent federal AI review’s finding that existing laws do not adequately prevent AI-facilitated harms before they occur.

“It’s clear … that a divide exists between our current regulatory environment and the ideal,” he told an AI governance conference at University of Technology Sydney’s Human Technology Institute on Wednesday.

“There’s a need for transparency and oversight to prevent unfair practices – accidental or intended. But can our current regulatory framework ensure that happens? I’m not so sure,” Mr Longo said.

“Does it prevent blind reliance on AI risk models without human oversight that can lead to underestimating risks? Does it prevent failure to consider emerging risks that the models may not have encountered during training?”

The ASIC chairman said current laws meant there were already obligations for boards of companies using AI applications, and that firms should not make the mistake of treating AI as an unregulated Wild West.

“Businesses, boards, and directors shouldn’t allow the international discussion around AI regulation to let them think AI isn’t already regulated. Because it is,” he said. He promised ASIC would continue to act, “and act early” to deter bad behaviour.

But Mr Longo pointed to AI-driven credit checks that arbitrarily blocked customers from getting loans as an example of where current rules struggle to be effective, and of why special regulations are needed to prevent financial harm.

“Will they [customers] even know that AI was being used? And if they do, who’s to blame? Is it the developers? The company?

“And how would the company even go about determining whether the decision was made because of algorithmic bias, as opposed to a calculus based on broader data sets than human modelling?”

He said AI fraud controls raised similar risks.

“What happens to the debanked customer when an algorithm says so? What happens when they’re denied a mortgage because an algorithm decides they should be? When that person ends up paying a higher insurance premium, will they know why or even that they’re paying a higher premium? Will the provider?”

“...with ‘opaque’ AI systems, the mechanisms by which that discrimination occurs could be difficult to detect.”

“Even if the current laws are sufficient to punish bad actions, their ability to prevent the harm might not be,” Mr Longo said.

The market regulator also raised concerns about AI-powered investment apps.

“When, as a system, it learns to manipulate the market by hitting stop losses, causing market drops and volatility… when there’s a lack of detection systems… yes, our regulations around responsible outsourcing may apply – but have they prevented the harm?

“Or a provider might use the AI system to carry out some other agenda, like seeking to only support related party product, or share offerings in giving some preference based on historic data,” he said.

It was important for policymakers to ask the right questions about AI, Mr Longo said.

“And one question we should be asking ourselves again and again is this: ‘Is this enough?’”

Questions about transparency, explainability and the rapid pace of the technology’s development deserved careful attention.

“They can’t be answered quickly or off-hand,” he said. “But they must be addressed if we’re to ensure the advancement of AI means an advancement for all.”

APRA chairman John Lonsdale said the prudential regulator recognised AI’s ability to “significantly improve efficiency” in the system and to foster innovation, which was good for customers, but that it did involve a variety of risks lenders had to be attuned to.

Mr Lonsdale said it was important for banks to have the right controls for AI apps.

“So all of that is wrapped up in now, in CPS 230 [the new standard on operational risk], which is coming into play, and so, at this point in time, we think we’ve got enough in terms of the regulation to guide through the risks that we’re seeing.”

More here:

https://www.afr.com/politics/federal/special-ai-laws-needed-for-financial-services-longo-20240131-p5f1b4

I found this a fascinating article as it raised so many potential issues and problems I could barely keep up.
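One of Mr Longo’s questions – how a company would even determine whether a decision reflected algorithmic bias – at least has a well-known starting point. A minimal sketch of the sort of disparate-impact check his question implies might look like the following (purely illustrative: the groups, the toy data and the ‘four-fifths’ threshold are my assumptions, not anything from the article):

# Illustrative sketch only: a basic disparate-impact check over AI loan
# decisions. Column names, data and the 0.8 threshold are assumptions.
from collections import defaultdict

def approval_rates(decisions):
    """decisions: list of (group, approved) pairs -> approval rate per group."""
    totals, approved = defaultdict(int), defaultdict(int)
    for group, ok in decisions:
        totals[group] += 1
        approved[group] += int(ok)
    return {g: approved[g] / totals[g] for g in totals}

def disparate_impact(decisions, threshold=0.8):
    """Flag groups whose approval rate falls below `threshold` times the
    best-off group's rate (the US 'four-fifths' rule of thumb)."""
    rates = approval_rates(decisions)
    best = max(rates.values())
    return {g: r / best for g, r in rates.items() if r / best < threshold}

# Toy data: (demographic group, loan approved?)
sample = [("A", True)] * 80 + [("A", False)] * 20 \
       + [("B", True)] * 55 + [("B", False)] * 45

print(approval_rates(sample))    # {'A': 0.8, 'B': 0.55}
print(disparate_impact(sample))  # {'B': 0.6875} -> below the 0.8 ratio

A check like this is crude, of course: it can tell a lender that outcomes differ across groups, but nothing about why the model behaves that way, which is rather Mr Longo’s point about opaque systems.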

I am not sure we can rely on the regulators to catch all potential problems, so it will be up to us all to remain ‘alert but not alarmed’!

I hope we can avoid too much in the way of unanticipated fraud!

David.
