An accounting professional working on a client’s corporate tax return looks at a computer screen and sees a document that appears real, reads as if it’s real and contains information that seems real. But it’s not real. It’s an artificial intelligence fake. Increasingly sophisticated impersonation is becoming the AI tactic driving theft schemes.
Much time and effort are deservedly put toward warning consumers about AI scams, but financial professionals are targets, too. Criminals are the human equivalent of water seeking the path of least resistance. They’re searching for money through any crack, hole or seam in an individual’s or business’s defenses. Criminals go where the money is, and one of those places is accounting firms, which hold their clients' financial information.
The Security Summit is a coalition of the IRS, tax professionals, industry partners and state tax groups. In a news release from its recent annual meeting, the Security Summit sounded a red alert on AI scams targeting accounting firms:
“Identity thieves are taking numerous approaches to steal sensitive information from tax professionals. This includes posing as new clients, using phishing emails to trick people into sharing Central Authorization File information as well as elaborate schemes involving calling and texting. Tax professionals need to be on the lookout to avoid falling prey to these attacks, which threaten not just their clients but their businesses.”
In what seems to be a page torn from a “Terminator” movie script, AI systems are being used to teach each other how to better overcome cybercrime defenses. The Journal of Accountancy described the new type of AI-on-AI training system:
“In a recent development, criminals have turned to generative adversarial networks (GANs), which use two neural networks; i.e., they are basically two AI systems working in conjunction. The criminals train one of the networks to generate false information, while the other is designed to detect the false information. They are used to train each other, continually creating better means of evading detection.”
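The adversarial training loop described above can be sketched in miniature. The toy below is purely illustrative (every name and number is my own assumption, not drawn from any real fraud tool): a one-dimensional "generator" learns to produce values that resemble real data clustered near 4.0, while a logistic "discriminator" simultaneously learns to tell real from fake. Each side's training signal comes from the other, which is the "train each other" dynamic the quote describes.

```python
import math
import random

random.seed(0)

def sigmoid(u):
    return 1.0 / (1.0 + math.exp(-u))

# "Real" data: values clustered near 4.0 (a stand-in for genuine documents).
def real_sample():
    return random.gauss(4.0, 0.5)

# Generator G(z) = a*z + b maps random noise z to a fake sample.
a, b = 1.0, 0.0
# Discriminator D(x) = sigmoid(w*x + c) scores how "real" x looks.
w, c = 0.0, 0.0

lr = 0.05
for step in range(3000):
    z = random.gauss(0.0, 1.0)
    x_real = real_sample()
    x_fake = a * z + b

    # Discriminator step: ascend log D(real) + log(1 - D(fake)),
    # i.e. push D(real) toward 1 and D(fake) toward 0.
    u_r = w * x_real + c
    u_f = w * x_fake + c
    gw = (1 - sigmoid(u_r)) * x_real - sigmoid(u_f) * x_fake
    gc = (1 - sigmoid(u_r)) - sigmoid(u_f)
    w += lr * gw
    c += lr * gc

    # Generator step: ascend log D(G(z)) (non-saturating loss),
    # i.e. move fakes toward wherever the discriminator scores "real".
    z = random.gauss(0.0, 1.0)
    x_fake = a * z + b
    grad = (1 - sigmoid(w * x_fake + c)) * w
    a += lr * grad * z
    b += lr * grad

# After training, the generator's average output has drifted from 0
# toward the real data's neighborhood around 4.0.
fake_mean = sum(a * random.gauss(0, 1) + b for _ in range(1000)) / 1000
```

The key design point mirrors the quote: neither network sees a fixed target. The discriminator's errors tell the generator where to move, and the generator's improving fakes force the discriminator to sharpen, with each round producing fakes that are harder to detect.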
To show that even professionals can fall victim, my June column, "AI deepfakes: The non-person talking to you wants to steal your money," recounted how a Hong Kong multinational corporation employee was directed to transfer $25 million to several bank accounts following a conference call with several company officials, including the company’s chief financial officer. The problem: everyone on the call – except the employee being scammed – was an AI “deepfake.” They were AI-generated impressions of the real people.
Fakes are coming in the form of documents, video and voice replication. The call an accountant receives may sound exactly like a client asking for a particular piece of information they’ve misplaced; however, it might be a voice deepfake, the technology of which is becoming increasingly advanced.
The suggestion for businesses and individuals alike is this: talk to your accounting firm about how it is protecting itself, and you, from AI scams as they continue to evolve. We’re all in the position of a football cornerback: the receiver knows the play, the route he’ll run and where he should get the ball; the cornerback can only anticipate and react to defend against the catch. The criminals know where they plan to go. Honest enterprises are on the defensive. Honest people are under daily attack.
Asking how your accountants are accounting for AI should not be a problem for them. When you’re playing to win, you want the best possible defense.
This article first appeared in Knox News.