Yves here. Richard Murphy has a succinct and excellent explanation of the inherent limitations of AI, especially in professional roles (he focuses on accounting and tax, but the same arguments apply to healthcare and law). A big limitation I pointed out years ago, when data mining significantly reduced entry-level jobs, is that low-level chores like legal research teach new professionals the basics of the job. If you skip that, they will be undertrained. I experienced this in my Stone Age youth. I was in the last group of Wall Street newbies who manually created spreadsheets and pulled data from hard copies of SEC filings and annual reports. I found that my juniors, who downloaded data from Compustat that was sometimes erroneous but never corrected, had a much weaker understanding of how a company’s finances worked.
By Richard Murphy, Adjunct Professor of Accounting Practice at the School of Management, University of Sheffield, Director of the Corporate Accountability Network, Member of Finance for the Future LLP, and Director of Tax Research LLP. Originally published in Funding the Future
Summary
Although AI has potential, I believe it cannot replace human judgement and skill in many professions, such as education, medicine, and accounting.
While AI may be able to automate certain tasks, it lacks the ability to interpret non-verbal cues and understand complex real-world problems.
Experts need experience and training to provide human solutions, but AI has limitations that make it a poor substitute for deep human interaction and expertise.
Gaby Hinsliff of The Guardian wrote in a column published yesterday:
The idea of harnessing technology as a magic bullet to enable governments to do more with less has become increasingly central to Labour’s plans to revive Britain’s public services as Rachel Reeves hints at a painfully tough budget. In a series of back-to-school interventions this week, Keir Starmer promised to push for “the full potential of AI”, while Science Secretary Peter Kyle argued that automating some routine tasks, such as marking, could free up valuable time for teachers to teach.
She’s right: this is a Labour obsession. The push seems to come from the Tony Blair Institute, whose director has long misunderstood the capabilities of technology and seems largely unaware of that fact.
The specific issue she addressed was the use of AI in education: its proponents believe it will make it possible to create bespoke programs for each child. As Gaby Hinsliff points out, this idea has so far failed to deliver.
Of course, I acknowledge that most innovations fail before they succeed; that is how things generally work. So it would be unwise to say that because AI has not solved this problem yet, it never can. But as someone actively incorporating AI into my own workflow, I see big problems with much of what employers and others are doing with it.
The labor market’s immediate reaction to AI seems to be to downgrade the recruitment of the talent it currently needs, because employers believe AI will reduce the demand for skilled staff in the future. The assumption, in many fields, is that specialist skills will be replaced by AI, and graduates are being hit hard by this attitude right now.
For example, in accounting, it is envisioned that AI will be able to answer complex tax questions, so that tax expertise will be less necessary, and that AI will similarly take over the preparation of complex accounts, such as the consolidated accounts of a corporate group.
People who make such assumptions are incredibly naive. Even if an AI could perform some of these processes, huge problems would result, the biggest of which would be that no one would have the skills to know if what the AI did was correct.
To become good at tax, you need to read a lot about tax, write a great deal about it (usually to advise clients), and correct your work when your boss tells you you didn’t get it right. Human learning is a highly iterative process.
Employers seem to think they can do away with much of this right now. They can think so only because the people deciding to scrap training posts have already been through that training and therefore have the skills to understand the field. In other words, they know what the AI should be doing. But when the next generation is hired into positions of similar authority, they will not know what the AI is doing. They will have no skill set with which to judge whether its output is right, so they will have to assume that it is.
In this case, the logic of AI advocates is the same as that used by people like Wes Streeting when arguing for the use of physician associates. Physician associates are apparently partially trained clinicians who currently work in the NHS and go on to perform operations without the depth of knowledge required for the tasks they are asked to undertake. They are trained to answer the questions put to them. The problem is that the wrong question may be asked, and the physician associate will flounder and do harm.
The same can be said of AI: it answers the questions put to it. The question is, how does it handle the problem that is never asked? When it comes to tax, clients rarely ask the right question. True professional skill comes from first understanding what the client really wants, then judging whether what the client wants is sensible, and finally reframing the question in a way that addresses the client’s actual needs.
The difficulty is that this is all about human interaction, but it also requires understanding, and appropriate reframing, of every technical aspect of the issue under consideration (which usually involves multiple taxes, plus accounting and often law), all of which demands a fair amount of judgement.
Do you think AI is up to the task at this point? No, I don’t think so.
Could AI become capable of performing it? I have my doubts, just as I have doubts about its ability to address many medical and other specialized problems.
Why? Because answering such questions requires the ability to read all of the client’s non-verbal and other signals. The technical part is only a small part of the job, but without a command of the technical elements, whatever your field or profession, you cannot frame the question properly or know whether the answer you are providing is correct.
In other words, if young professionals are deprived of the opportunity to make all the mistakes in the book, as will happen if their training roles are handed over to AI, they are highly unlikely to become knowledgeable enough to solve the real-world problems posed by real people, because most people who turn to experts for help are not looking for technical answers to their questions.
They want the lights to work.
They want the pain to go away.
They want to pay the correct amount of tax without the risk of making a mistake.
They want to get divorced with the least amount of stress.
The expert’s job is not to tell people how to do these things. It is to provide human solutions to human problems, and you cannot do that unless you understand both the human and the technical problems at hand. Hand the technical part over to AI, and all you’re left with is a warm, empty, meaningless smile that reassures no one.
I’m not saying we shouldn’t use AI. I know we will. But anyone who thinks that AI can replace a large part of human interaction is sorely mistaken. Because humans ask totally illogical questions whose meaning a human has to understand. I believe AI cannot do that.
And that’s why I think Gaby Hinsliff is right when she concludes that “AI has only a limited role to play in the classroom.”
It’s true that AI has the potential to do great good if handled properly, but as Starmer himself has repeatedly said, there are no easy answers in politics, and there don’t seem to be any easy answers when you ask ChatGPT.