Tumbling further down the AI rabbit hole
As AI (Artificial Intelligence) grows more sophisticated and widespread, the voices warning against its potential dangers grow louder. What happens if AI becomes more intelligent than humans – and do we need to concern ourselves with preventing that from happening?
6 November 2024
ALAN HAYES
THE race is now on for tech companies, both established players and start-ups, to develop ever more sophisticated algorithms – and with them come increased automation of certain jobs, gender- and racially-biased decision-making, and autonomous weapons that operate without human oversight, to name just a few. Unease, rightly so, abounds on a number of fronts. And we’re still in the very early stages of what AI is really capable of. How far down the rabbit hole will we tumble before it’s too late?
What many Australians still do not realise is that Australian banks are leading the way in developing AI. Their increasing reliance on technology – allegedly improving operational innovation and fraud detection – will do little to personalise customer interactions or improve the customer experience. Meanwhile, human interaction evaporates into the ether so that greedy banks can maximise their profits, even though human financial analysts bring creativity and critical thinking that AI simply doesn't possess.
AI is only as good as the data it's trained on. If biased data is used, it can perpetuate unfair lending practices. For instance, an AI model trained on historical data that disproportionately denied loans to certain demographics might continue that bias in its calculations, often because those borrowers have limited credit histories.
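By way of illustration only – the banks' actual models are far more complex and are not public – here is a minimal sketch, in Python, of how a system that simply learns from past approval rates ends up replicating them. All the data and group labels below are invented.

```python
# Illustrative sketch only: how a model trained on biased historical
# lending decisions can reproduce that bias. All data here is invented.
from collections import Counter

# Hypothetical historical decisions: (has_long_credit_history, approved)
history = [
    (True, True), (True, True), (True, True), (True, False),
    (False, False), (False, False), (False, False), (False, True),
]

# "Training": record the approval rate for each group in the data.
approvals = Counter()
totals = Counter()
for long_history, approved in history:
    totals[long_history] += 1
    approvals[long_history] += approved

def predict(long_history: bool) -> bool:
    """Approve only if the group's historical approval rate exceeds 50%."""
    return approvals[long_history] / totals[long_history] > 0.5

# The model simply mirrors the past: applicants with short credit
# histories are rejected regardless of their actual ability to repay.
print(predict(True))   # True  - long-history applicants approved
print(predict(False))  # False - short-history applicants rejected
```

The point of the sketch is not the arithmetic but the feedback loop: the model has no notion of fairness, only of what the historical data happened to record.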
The reality that banks, and other financial institutions, need to come to grips with is that borrowers mistrust ‘generative AI’ to select their mortgages. Most borrowers still value the human touch throughout the home loan process. Why? Because home loan customers are making one of the most important financial decisions of their lives, and most people still want to talk to a person about this decision.
The problem for a person applying for a loan assessed by generative AI is that approval depends upon the ‘key words’ recognised by the algorithm being used. Depending on how that algorithm was written and developed, unless you are aware of which key words to use, your application will more than likely be rejected – whereas under a human-led process the chance of approval is greater.
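To see why wording alone can decide the outcome, here is a purely hypothetical sketch of the kind of keyword screening described above. The key phrases are invented – no lender publishes its actual rules – but the effect is the same: two applicants in identical circumstances are treated differently because only one happens to use the ‘right’ words.

```python
# Illustrative sketch only: a crude keyword filter of the kind described
# above. The required phrases are invented; real systems are not public.
REQUIRED_KEYWORDS = {"permanent full-time", "stable income", "owner-occupier"}

def screen_application(text: str) -> str:
    """Reject the application unless every expected key phrase appears."""
    text = text.lower()
    missing = [kw for kw in REQUIRED_KEYWORDS if kw not in text]
    return "approved for review" if not missing else f"rejected (missing: {missing})"

# Two applicants in identical circumstances, described differently:
print(screen_application("Permanent full-time nurse, stable income, owner-occupier"))
print(screen_application("Nurse on an ongoing contract buying the home I live in"))
```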
It’s not only the banks racing headlong into AI – more and more of us, just like Alice in Wonderland chasing the White Rabbit, will be swept along with them. The first casualty is jobs: data analysis, bookkeeping and basic financial reporting are all highly susceptible to automation, and they will simply disappear. Why? Because those types of jobs are prime candidates for AI-driven efficiency improvements – efficiency that requires neither emotion nor creativity.
The burning question that still hasn't been answered: "With the loss of human intervention in financial reporting, will you be penalised with an incorrect credit score?" AI in your credit report could become a foe that is almost impossible to fight, because the complexity of AI algorithms makes it difficult to understand how your score was calculated, let alone to pinpoint and dispute errors.
What is even more alarming is that more and more employment recruiters have also jumped on the ‘AI bandwagon’ and are now using AI to screen job applications. The same ‘key word’ problem you face when making a loan application is inherent in the recruitment process – applications are dismissed out of hand because they are never seen by a human being.
In its most comprehensive survey to date, our corporate regulator, the Australian Securities and Investments Commission (ASIC), found there had been a “rapid acceleration” in the number of AI uses, but also a shift towards “more complex and opaque” ones.
‘Booming but lacking sophistication’ is how ASIC describes the way the country’s financial services sector is deploying artificial intelligence, warning of the risks – from bias to fake information – that are being introduced with little concern or morality to keep them in check.
The concern with the rapid increase in AI applications for loans, employment and everything in between is that AI algorithms can perpetuate existing biases present in the data they're trained on. This can lead to unequal outcomes, to the detriment of individuals seeking a loan or employment, and may particularly disadvantage people from certain backgrounds. Without a discerning approach, AI tools may continue to propagate these tendencies.
Is AI morally wrong?
The ‘elephant in the room’ that needs to be addressed: is AI morally wrong? Among the key ethical issues associated with AI are bias and fairness – AI systems can inherit and even amplify biases, resulting in unfair or discriminatory outcomes, particularly in hiring, lending, and law enforcement applications.
AI systems often require access to large amounts of data, including sensitive personal information. The ethical challenge lies in how this data is collected, used and protected to prevent privacy violations – which leads to the related problem of autonomy and control.
As AI systems become more autonomous, concerns grow about the potential loss of human control. This is especially relevant in applications such as autonomous vehicles and military drones, where AI systems make critical decisions.
Automation through AI quickly leads to job displacement and economic inequality – an issue that has already become reality. In one case reported to the Grapevine, a 43-year-old ended up on the ‘human scrapheap’ because their job reassessment was evaluated by a ‘microchip’ – an automated process programmed to look for certain ‘key words’.
So, what happens when AI makes a mistake? Who is accountable and who accepts liability? Determining who is responsible when an AI system makes a mistake or causes harm will be difficult.
As we have already seen in the media, and as reported on below, establishing clear lines of accountability and liability will have the spin doctors and their excuses setting the sky on fire – especially when it comes to ethical AI in healthcare, an issue with more legs than a centipede.
The use of AI in healthcare diagnostic tools and treatment recommendations raises ethical concerns related to patient privacy, data security, and the potential for AI to replace human expertise.
But it’s not just healthcare, finance and banking that need to be held accountable in the charge to build a new AI world, it’s our governments as well – criminal justice heads the list. The use of AI for predictive policing, risk assessment, and sentencing decisions can perpetuate biases and raise questions about due process and fairness.
And what about security and misuse? There’s no doubt that AI can be used for malicious purposes, such as cyberattacks, deepfake creation, and surveillance. As AI technology has become more accessible, the number of people using it for criminal activity has risen. But how can this be controlled? It seems that it can’t - as quickly as the ‘tech-heads’ have a ‘wet dream’ over their latest AI baby, another ‘technophile’ will have developed a ‘work-around’ for nefarious gain.
Is there an answer?
It’s not just the ‘human scrapheap’ that AI will add to; it’s also the manipulation of AI tools to clone voices, generate fake identities and create convincing phishing emails – all with the intent to scam, hack and steal a person's identity, or to compromise their privacy and security.
So, what can be done?
As of 1 September 2024, Australian Public Service (APS) agencies began implementing a Policy for the responsible use of AI in government. This was in line with Australia’s eight Artificial Intelligence (AI) Ethics Principles, designed to ensure AI is safe, secure and reliable in both government and private industry.
Yet despite the reassurances from the Federal Government, Australians are being unknowingly manipulated so that private industry can develop AI platforms – private data has been sold! Why? Because the adoption of the principles is entirely voluntary. They are designed to prompt organisations to consider the impact of using AI-enabled systems, not to compel adherence to Australia’s eight Artificial Intelligence (AI) Ethics Principles: human, societal and environmental wellbeing; human-centred values; fairness; privacy protection and security; reliability and safety; transparency and explainability; contestability; and accountability.
So, the question that needs to be asked: “What is the point of these principles when they can be openly flouted?”
This problem became evident recently, when a leaked email from tech company harrison.ai to investors showed an executive blaming the radiology chain I-MED for concerns raised about private medical data being used to train AI without patients’ knowledge.
Harrison’s chief operating officer, Peter Huynh, claimed that the company had obtained ethics approval for its clinical studies and complied with the law. Yet the company has failed to publicly provide a copy of its ethics approval applications or the approvals themselves, which would show how consent was sought or waived for its experiment.
In a statement on its website, I-MED said: "In 2020, I-MED Radiology embarked on a project with Annalise.ai (now fully owned by Harrison.ai), focused on improving patient health outcomes, supporting accurate diagnoses and quality of care for not just our patients but the wider Australian community. As part of that project, I-MED de-identified data using best practice frameworks developed by the CSIRO and the Office of the Australian Information Commissioner."
As one reader, who emailed the Grapevine and asked if we were going to cover the issue, succinctly put it: "The issue is how the AI was created. Why this matters is because not every potential use of sensitive data is one that people might feel really good about. For example, if I-MED was giving this data to insurers, who decide to jack up premiums for people with gnarly looking chest x-rays, it becomes a real problem.
"Maybe there's a case to say that Australian privacy regulations adds friction to important research. But the way in which this data was obtained alarmed experts, and seems to have alarmed many others too."
It would seem, however, that at least principle six of ‘Australia’s eight Artificial Intelligence (AI) Ethics Principles’ – transparency and explainability – has not been adhered to.
I-MED says that it cares for over three million patients each year, yet it surreptitiously used their data. Had there been the courtesy of a request to use this data, how many patients would have agreed?
The fact is that while AI may bring incredible intelligence, it is still really dumb! It may be able to crack quantum physics, but it cannot be taught the simplest of tasks. While it is comparatively easy to make computers exhibit adult-level performance on intelligence tests or at playing chess, it is difficult or impossible to give a computer the skills of a one-year-old when it comes to perception and mobility.
The reality that we, as a society, now face – in addition to the more existential threat of AI – is the loss of our privacy and security.
The gates have now been flung open for Big Brother to ‘storm the castle’ and enforce a dystopian society!