Artificial intelligence is all the rage, and in the coming months and years an increasing number of Americans will find themselves enraged over ever-more sophisticated AI-generated identity and tax refund scams.
For decades, identity theft has been a headache for authorities and a heartache for its victims. With personal information obtained by thieves, Americans have had their tax refunds stolen, their credit cards compromised, their money pilfered and their lives severely disrupted. AI threatens to raise the identity-theft stakes exponentially. Here's a real-life scenario that cost a company $25 million, as explained on the website thestreet.com:
“A finance worker at the firm moved $25 million into designated bank accounts after talking to several senior officers, including the company's chief financial officer, on a video conference call.
“No one on the call, besides the worker, was real.
“The worker said that, despite his initial suspicion, the people on the call both looked and sounded like colleagues he knew.”
There are an extraordinary number of ways AI will improve our lives, but, as with cellphones, email, the internet and other modern advancements, criminals are hard at work exploiting AI to steal identity information and forge new, more devious scams. Perhaps the most dangerous are deepfakes, such as the $25 million incident described above. The Merriam-Webster Dictionary defines a deepfake as “an image or recording that has been convincingly altered and manipulated to misrepresent someone as doing or saying something that was not actually done or said.”
A scenario: you’re in your office or store. You get a phone call from your boss, who says to transfer a sum of money to a client and gives you the information about where to send the funds. It sounds like the boss, down to his or her intonation. You send the money. But it wasn’t your boss; it was an AI-generated deepfake, and you’ve been faked out.
Another scenario, another call: it’s your son or daughter, or grandson or granddaughter. They tell you they’re in big trouble. They’re in debt, or someone is threatening them, or they’re traveling in another state or country and the authorities are holding them. In each case, they need money sent immediately to pay the debt, eliminate the threat or pay the fine or bail. You send the money as asked and directed. But it wasn’t your child or grandchild: it was an AI-generated deepfake, and you’ve been faked out.
What are the defenses against AI deepfakes? At present, they’re evolving, just like the scams. What it may come down to in individual cases is conducting verification the old-fashioned way: real person-to-person contact. An example: you’ve received a request from a coworker in another location to forward funds to a specific account so the company can make a purchase. Before responding, call the other office and ask to speak with that person directly to confirm the request came from them. If necessary, ask one of their co-workers to go to the alleged caller’s office and confirm they reached out to you.
The Federal Trade Commission, which is developing rules on AI and deepfakes, asks that deepfakes and other fraud be reported at reportfraud.ftc.gov.
Deepfake sophistication will grow as AI advances. In the near term, the best defense is constant vigilance. In the 21st century, with deepfakes in circulation, it’s up to you whether to trust something you’re seeing and hearing over the phone, in a text or on a screen – and it’s also your responsibility to verify, particularly when money is involved.
Deepfakes are another unfair weapon wielded against law-abiding people. But if you fall victim to a deepfake, will the fact that it is unfair make any difference? Sadly, no.
This article first appeared in Knox News.