The author
Or Shichrur
Cyber Threat Intelligence | OSINT | Multilingual
Deepfake Cyber Crime to Shift the Cyber Threat Landscape
61% of organizations have seen deepfake incidents increase in the past year – with 75% of these attacks impersonating the CEO or another C-suite executive, according to a recent report by Deep Instinct. Moreover, 97% are concerned they will suffer a security incident as a result of adversarial AI.
DEFINITION:
Deepfakes, generated by advanced AI technology, encompass audio, video, or images crafted with malicious intent to fabricate false scenarios. Threat actors typically disseminate these deceptive materials across social media platforms to manipulate their audience.
Notably, hybrid and remote work environments make employees significantly more susceptible to deepfake social engineering attacks. Reduced in-person interaction means employees may be less likely to consult with colleagues or IT departments about suspicious communications. Additionally, remote workers’ potential reliance on personal phones for work communications makes them prime targets for social engineering attacks.
ELECTION DEEPFAKE ATTACKS: A GAME-CHANGER IN THE GEO-POLITICAL ARENA
Just a few days before Slovakia’s pivotal election in September 2023, suspicious audio recordings spread on Meta platforms. In the first, the leading candidate seemingly boasted about rigging the election; in the other, he proposed to double the price of beer. Both went viral, contributing to his defeat by a pro-Moscow opponent. AFP’s fact-checking department later concluded that the recordings were deepfakes, maliciously created with weaponized AI technology.
In 2024, elections will be held in the UK, France, the US, and other nations. These elections are likely to face more disruption from advanced deepfake materials than ever before, while both media and governments have limited resources and authority to combat misinformation.
BUSINESSES AND CORPORATIONS ARE VULNERABLE TO FINANCIAL LOSSES TOTALLING BILLIONS
While sophisticated phishing, smishing, and vishing attacks have been on the rise in recent years, the emergence of deepfake AI technologies poses even higher financial risks for businesses.
In September 2023, threat actors impersonated the CFO of a Hong Kong-based multinational company using deepfake technology, tricking a finance worker into transferring $25 million during a fraudulent video conference call.
In another significant deepfake phishing attack, threat actors leveraged AI voice cloning to impersonate a bank director and lure a bank manager in the UAE into transferring $35 million.
A GROWING MENACE FOR FINANCIAL INSTITUTIONS AND EMPLOYEES
Cyberint’s observations indicate that both financial institutions and financial employees in non-financial institutions are increasingly vulnerable to deepfake attacks. Financial institutions face heightened risk because deepfake technology can potentially bypass KYC (Know-Your-Customer) verification processes. A KYC check is the mandatory process of identifying and verifying a client’s identity when opening an account and periodically over time; in other words, banks must ensure that their clients are genuinely who they claim to be.
By bypassing KYC verification, threat actors can initiate fraudulent activities. Meanwhile, financial employees, regardless of their institution, are prime targets for deepfake attacks designed to deceive them into authorizing large-scale transfers that benefit threat actors. This dual vulnerability underscores the critical need for enhanced security measures and training to combat the evolving threat of deepfakes.
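One practical control against deepfake-authorized transfers is a dual-approval policy: large payments require multiple independent sign-offs collected over separate, out-of-band channels, so a single convincing video or voice call cannot move money on its own. The sketch below is a minimal, hypothetical illustration of such a policy check; the threshold, role names, and data model are illustrative assumptions, not a description of any specific institution’s controls.

```python
from dataclasses import dataclass, field

# Hypothetical policy parameters for illustration only.
APPROVAL_THRESHOLD = 10_000   # transfers at/above this need extra approvals
REQUIRED_APPROVERS = 2        # independent approvers beyond the requester

@dataclass
class TransferRequest:
    amount: float
    requester: str                      # who initiated the request
    approvals: set = field(default_factory=set)  # who has signed off

def may_execute(req: TransferRequest) -> bool:
    """Return True only if policy allows executing this transfer."""
    # Small transfers: the requester alone suffices.
    if req.amount < APPROVAL_THRESHOLD:
        return True
    # Large transfers: require approvals that are independent of the
    # requester, ideally gathered out-of-band (callback to a known
    # number, in-person check), not via the channel that made the request.
    independent = req.approvals - {req.requester}
    return len(independent) >= REQUIRED_APPROVERS
```

Under this policy, the $25 million video-call request above would have stalled: even a flawless deepfake of the CFO yields only one voice, and the transfer stays blocked until independent approvers confirm it through channels the attacker does not control.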
CYBERINT MONITORING REVEALS HACKERS’ GROWING INTEREST IN DEEPFAKE TECHNOLOGIES
IMPACT
By impersonating executives and creating fabricated personas through manipulated videos and AI-generated content, these attacks can severely damage a company’s reputation. The dissemination of misinformation via deepfakes can also lead to financial repercussions, eroding trust among investors, customers, and stakeholders and harming the company’s brand and market standing. Moreover, fraudulent customer interactions facilitated by deepfake technology can expose corporations to legal and financial risks. To counter these threats effectively, organizations must implement robust cybersecurity measures and proactive strategies to defend themselves and protect their integrity and trustworthiness.
RECOMMENDATIONS AND COURSES OF ACTION
- Cyberint Persistent Monitoring – monitoring relevant social media platforms, underground platforms, and deep and dark web forums is vital to detect indications of planned deepfake attacks. Additionally, monitoring the evolving threat landscape of deepfake attacks is imperative: staying informed about emerging techniques and attack vectors enables organizations to adapt their security strategies and proactively defend against potential threats.
- Removal of Malicious Content – Cyberint offers takedown and remediation solutions, ensuring the mitigation of impersonating accounts on social media, as well as other types of fake and/or malicious content online.
- Enforce Deepfake Awareness Training – regularly train all employees on deepfake risks as part of their security awareness curriculum. Instruct them on how to identify and report such threats, and make them aware of indicators of fabricated audio/video, such as suspicious speech patterns, a lack of emotional tone, lip-sync errors, and inconsistencies in facial expressions. Consider conducting simulations as part of the training to measure their awareness.
- Implement Multi-layered Authentication – enforce multi-factor authentication (MFA), combining a strong, unique password policy with biometric authentication options and/or PINs, for enhanced security.
- Governments and Venture Capital to Invest in Deepfake Detection Solutions – given the significance of the menace and its impact, it is vital to invest additional funds in AI security research to safeguard organizations and nations against this threat.
- Adopting Innovative Technologies – consider using emerging technologies such as audio watermarking and photoplethysmography (PPG). For additional information on securing voice authentication, see the Cyberint article “Securing Voice Authentication in the Deepfake Era”.
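To make the multi-factor authentication recommendation above concrete, here is a minimal sketch of a time-based one-time password (TOTP, RFC 6238) second factor using only the Python standard library. This is an illustrative fragment, not a complete MFA stack: it omits enrollment, rate limiting, and secure secret storage, all of which a production deployment requires.

```python
import hmac
import struct
import time

def totp(secret: bytes, for_time: int, digits: int = 6, step: int = 30) -> str:
    """Compute an RFC 6238 time-based one-time password (HMAC-SHA1)."""
    counter = for_time // step                       # 30-second time window
    msg = struct.pack(">Q", counter)                 # 8-byte big-endian counter
    digest = hmac.new(secret, msg, "sha1").digest()
    offset = digest[-1] & 0x0F                       # dynamic truncation (RFC 4226)
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % (10 ** digits)).zfill(digits)

def verify(secret: bytes, submitted: str, now=None, window: int = 1) -> bool:
    """Check a submitted code, tolerating +/- `window` 30 s steps of clock drift."""
    t = int(time.time() if now is None else now)
    return any(
        hmac.compare_digest(totp(secret, t + drift * 30), submitted)
        for drift in range(-window, window + 1)
    )
```

Even if a deepfake voice or video convinces an employee to reveal a password, the attacker still lacks the time-limited code generated on the victim’s enrolled device, which is precisely the layering the recommendation calls for.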
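As a toy illustration of the audio-watermarking idea mentioned in the last recommendation: the sketch below hides a bit string in the least significant bits of raw PCM samples, so authentic recordings can carry a provenance marker that a naively generated deepfake would lack. This fragile LSB scheme is an assumption-laden teaching example only; real watermarking systems use far more robust techniques that survive compression and re-recording.

```python
def embed_watermark(samples: list[int], payload_bits: str) -> list[int]:
    """Overwrite the least significant bit of the first len(payload_bits)
    PCM samples with the payload (a string of '0'/'1' characters)."""
    if len(payload_bits) > len(samples):
        raise ValueError("payload longer than audio")
    out = list(samples)
    for i, ch in enumerate(payload_bits):
        out[i] = (out[i] & ~1) | int(ch)  # clear LSB, then set payload bit
    return out

def extract_watermark(samples: list[int], n_bits: int) -> str:
    """Read the payload back from the first n_bits samples' LSBs."""
    return "".join(str(s & 1) for s in samples[:n_bits])
```

Because only the lowest bit of each sample changes, the marked audio is perceptually identical to the original; the trade-off is fragility, which is why production schemes spread the watermark redundantly across the signal instead.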