Voice and video scams descend on business owners.
Gone are the days of misspelt text messages and dodgy emails. Artificial intelligence has enabled scammers to take business fraud to jaw-dropping new levels.
The latest threats are live video and audio clones of staff being used to trick firms into transferring millions into the hands of fraudsters. These deepfake clones – usually of high-ranking employees or business owners – are generated by AI programs, using voice and video samples often downloaded from the company’s own website or social media channels. In one recent case, an engineering firm lost $38 million to a sophisticated live video scam featuring an AI clone of the company’s CFO (see AI Fraud Frontier below).
And business owners who assume this can only happen to big businesses need to think again: one expert predicts tailored voice and video scams are about to hit Australian SMEs hard.
Swinburne University of Technology marketing lecturer Dr Lucas Whittaker first warned about the risk posed by AI deepfakes back in 2022, when he co-authored the paper Brace Yourself! Why managers should adopt a synthetic media incident response playbook in an age of falsity and synthetic media.
Since then, the threat has only escalated, Dr Whittaker says, as the technology has advanced to the point where voice cloning can be achieved with as little as three seconds of audio. These tools can now be used at home by everyday consumers, with little expertise required.
“You no longer need a lot of technical knowledge because these tools have become increasingly accessible, yet able to generate relatively sophisticated output,” he says. “Nowadays, anyone can go to an app or online web service and very cheaply generate an AI voice model of yourself – or any person – and you only need a couple of seconds of audio data to generate a passable model.
“Once a voice or video clone has been created, it can be used to voice any text script.
“We’re seeing it happening already: fraudsters cloning the voices of family members from social media posts and then using them in messages to parents or relatives, asking them to send money urgently because they’re in trouble.
“But perhaps the most alarming development has been the emergence of video and voice clones that can operate in real time – rather than in pre-recorded grabs – to hold short conversations with victims.
“There were limitations in the past when using real-time deepfakes because there were significant processing power and latency factors involved,” Dr Whittaker explains. “But as the technology continually evolves we’re going to be seeing a lot more of these real-time video and voice deepfake clones, and they’re going to be harder to detect.
“The frequent use of social media channels to share marketing content – including seminars, interviews and how-to videos – provides scammers with an abundance of audio and video samples to launch tailored attacks on SMEs.
“Small businesses can be more at risk because you actually have a much more predictable interaction process within a smaller organisation. That can make it easier to personalise an attack to target a particular receptionist, for example, because a malicious actor will know that they will usually answer the phone,” he says.
Greater familiarity also means SME staff are more likely to trust a voice they recognise – or think they recognise – without question.
Minimising risk
While businesses increase their vulnerability to AI phishing attacks by posting content online, Dr Whittaker says there’s little option in the modern world. Part of the eventual solution lies with platform providers developing audio and video watermarking technology to prevent content being misused. Until that happens, SMEs can minimise risk within their organisations through the following steps:
1. Review use of voice and facial recognition:
Biometric authentication using face and voice prints has already been cracked by AI clones. Journalists last year reported using AI voice clones to access bank accounts and Australian Taxation Office accounts. Over-reliance on biometrics could therefore be a vulnerability.
2. Educate and update staff:
SMEs should ensure all staff – particularly those in gatekeeper roles, such as executive assistants and financial controllers – are aware of the latest deepfake capabilities.
“I think older generations, or those perhaps not as engaged in these sorts of topics, are probably more at risk because they don’t have that media literacy to a) know what is possible and b) look for red flags to detect these sorts of attacks,” Dr Whittaker says.
Provide employees with examples, such as the 60 Minutes America demonstration of how a voice clone can be created in minutes, and comparisons of a real versus a cloned voice. Equally, complacency can be an issue, with studies indicating people routinely underestimate the risk and overestimate their ability to spot a fake. Setting a Google Alert for audio and video clone scams and sharing recent news can also help.
3. Develop protocols for confirming identity:
Many voice phishing, or ‘vishing’, scams aimed at businesses rely on impersonating authority figures. Encourage staff to question and confirm identity, even when a request appears to come from senior management. Don’t rely on recognising phone numbers, as these can be spoofed by scammers.
4. Multi-layer authentication and approvals:
Multi-factor authentication should be standard. Similarly, a chain of approvals should be required before sensitive data or large sums of money can be transferred – a simple sketch of what such a rule might look like follows below. Dr Whittaker also suggests a mix of online and offline approvals and authentications.
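For businesses that manage payments through their own software, the approval-chain idea can be made concrete. The following is a minimal illustrative sketch in Python – the threshold, role names and function are assumptions for this example, not taken from Dr Whittaker or any particular product – showing a rule that blocks large transfers unless two different people have signed off.

# Illustrative sketch only: a minimal "chain of approvals" rule for transfers.
# The threshold and names below are assumptions for the example.

LARGE_TRANSFER_THRESHOLD = 10_000  # dollars; set to suit your business

def can_release_transfer(amount: float, approvers: list[str]) -> bool:
    """Allow a transfer only once enough distinct people have approved it.

    Small transfers need one approver; anything at or above the threshold
    needs two different approvers, so no single identity (real or cloned)
    can move large sums alone.
    """
    required = 2 if amount >= LARGE_TRANSFER_THRESHOLD else 1
    return len(set(approvers)) >= required

# A large transfer approved only by the "CFO" is blocked, even if the
# request appears to come, live on video, from the CFO personally.
assert not can_release_transfer(38_000, ["cfo"])
assert can_release_transfer(38_000, ["cfo", "finance_manager"])

The point is the structure rather than the code: the second approval should ideally arrive through a separate channel – an in-person confirmation, or a call back on a known number – so that a convincing deepfake on one channel is never enough on its own.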
Spotting a fake
One tell-tale sign to watch for with video clones is image warping, Dr Whittaker says, particularly when hands are on or near the face. On video calls, he recommends asking participants to turn their heads from side to side and wave a hand in front of their face while looking for signs of warping.
On audio calls, listen for unusual inflections, and be wary of excuses for using unusual channels (such as a non-company email address or an unfamiliar mobile number). Fraudsters rely on creating a sense of panic or urgency.
Dr Whittaker cautions that as AI technology continually improves and deepfake attacks become more sophisticated, identifying such red flags will become significantly more challenging.
This blog post was originally published by our valued member firm, Power Tynan.