You may have read about some of the voice-cloning scams. Criminals can take just a few sentences of someone’s voice, perhaps from a TikTok video, and use artificial intelligence to clone it to say anything they want. In one case, a scammer recorded a boy’s voice from social media and then called his grandparents, claiming to be an attorney and saying that their grandson was being held for committing a crime.
The scammer then said he was handing the phone to their grandson, who frantically pleaded for help. That plea was cloned, of course, but sounded so realistic that when the scammer got back on the line and asked for $15,000, the grandparents paid.
Voice cloning can even be used to speak another language. In one creepy instance, a conspiracy theorist used a service from ElevenLabs (ElevenLabs.io) to clone Hitler’s voice and posted clips of him delivering a 1939 speech in English. The speech actually elicited empathy from some viewers.
And, famously, during the New Hampshire Democratic primary, a robocall that cloned President Biden’s voice discouraged people from voting.
These instances of cloning are examples of “deepfakes,” the term for fake voices, images, and video created with artificial intelligence that can be nearly indistinguishable from the real thing.
It’s not all bad, though.
For example, OpenAI, the creator of ChatGPT, has launched Voice Engine, which can clone a person’s voice in English and a range of other languages, including Mandarin, German, French, and Japanese. OpenAI envisions it being used in healthcare and education. Doctors, for instance, could use it to talk to patients in their primary languages, including Swahili and Sheng. As I write this, it’s not yet available to the general public while OpenAI explores ways to keep it from being misused.
AI is also being used nefariously to create deepfake images. Image manipulation has a long history, but in the past it took some skill and a tool like Photoshop. Today, however, even middle school boys are using AI websites to “nudify” photos of their female classmates taken from social media, causing serious distress for the girls and getting the boys expelled.
There’s also an obvious danger of deepfakes influencing elections. Dozens of AI-generated photos depicting presidential candidate Donald Trump with Black people have been circulating, in an apparent attempt to encourage African Americans to vote Republican.
One image shows Trump smiling with his arms around a group of Black women at a party. Another widely viewed AI image depicts Trump posing with Black voters on a front porch. It was originally created as satire but was then reposted by Trump supporters with a new caption falsely claiming that he had stopped his motorcade to meet with these voters.
These sorts of fakes are especially dangerous in poorer countries, where there’s less awareness of image manipulation. Last year in Bangladesh, a nation with a Muslim majority, a deepfake video of the opposition leader wearing a bikini sparked outrage among conservatives.
Unfortunately, fake videos, too, are becoming easy to create, thanks to AI.
Perhaps the most amazing scam using deepfake video occurred in the Hong Kong office of a multinational finance company and netted the scammers the equivalent of over $25 million (HK$200 million). An employee of the company received an email asking him to join a video conference that included the chief financial officer and other employees. They were all deepfakes. The fake executives asked the employee to make 15 transfers to five different Hong Kong bank accounts. Which he did.
Governments and the major tech companies are trying to get a handle on this. Since last year, it’s been a criminal offense in the UK to share deepfake porn. As I write this, the UK government has announced that simply creating a non-consensual deepfake porn image will also be a criminal act, even if it’s never shared. And if the image does get shared widely, the result could be jail time.
The U.S. government is considering laws related to AI fraud and intimate images, and at least two dozen states have introduced legislation. Louisiana, for example, criminalizes AI-generated sexually explicit images of minors.
Also, 20 leading tech companies have announced an initiative to combat deepfakes, particularly in relation to elections. Known as the Tech Accord, it includes Microsoft, Google, Meta (formerly Facebook), Amazon, Adobe, and OpenAI.
They aim to detect and label deepfakes on their platforms. And they’re working on features such as automatically “watermarking” AI-created content.
What can you do to deal with this issue? I asked ChatGPT for advice, and it replied: verify the source, cross-verify information, be skeptical of sensational content, become media literate, use detection tools such as browser plugins, and stay updated on technology trends.
And my role? To keep you updated!
Find column archives at JimKarpen.com.