Prevent deepfake fraud attacks

Deepfakes are artificially created images, videos, and audio designed to emulate real human characteristics. Deepfakes use a form of artificial intelligence (AI) called deep learning. Bad actors can leverage the technology behind deepfakes to commit identity theft, blackmail, and fraud.  The best-known type of deepfake is a face-swap video, which transposes one person's facial movements onto someone else's features. Another type of deepfake is voice cloning, which copies a person's unique vocal patterns in order to digitally recreate and alter their speech.

Deepfake Techniques:

  • Deepfake vishing, a type of narrowcast threat, uses cloned voices for social engineering phone calls. This technique has innumerable applications, including identity theft, imposter scams, and fraudulent payment schemes. A well-crafted deepfake vishing operation can exploit the call recipient’s trust in the impersonated party. Current technology enables realistic voice cloning that can be controlled in real time with keyboard inputs.
  • Fabricated private remarks, a type of broadcast threat, are deepfake video or audio clips that falsely depict a public figure making damaging comments behind the scenes. 
  • Synthetic social botnets, another broadcast threat, are also of primary concern. Fake social media accounts could be constructed from synthetic photographs and text and operated by AI, potentially facilitating a range of financial harm against companies, markets, and regulators.

Scenarios Targeting Individuals

Identity Theft:

  • A deepfake-synthesized voice used to initiate a fraudulent wire transfer.
  • Deepfake audio or video used to create bank accounts under false identities, facilitating money laundering.
  • Deepfakes used in social engineering campaigns to gain unauthorized access to large databases of personal information. For example, a company official might receive a deepfake phone call asking for their username and password.

Imposter Scams:

  • Criminals impersonate “the government, a relative in distress, a well-known business, or a technical support expert” to pressure the victim into paying money.

Cyber Extortion:

  • Scammers often manipulate victims by threatening imminent harm unless money is paid—for example, claiming the victim or a loved one faces criminal charges or suspension of government benefits. 
  • Scammers might clone the voice of a specific individual, such as a victim’s relative or a prominent government official known to many victims. 

Remote Work Positions:

  • Voice spoofing, or potentially voice deepfakes, used during online interviews of prospective applicants. In these interviews, the actions and lip movements of the on-camera interviewee do not fully coordinate with the audio; at times, actions such as coughing, sneezing, or other audible actions are not aligned with what is presented visually.
  • The use of deepfakes and stolen Personally Identifiable Information (PII) to apply for a variety of remote work and work-at-home positions. Positions may include information technology and computer programming, database, and software-related job functions. Notably, some reported positions include access to customer PII, financial data, corporate IT databases, and/or proprietary information.

Scenarios Targeting Companies

Payment Fraud:

  • Criminals often hack or spoof an email account of a chief executive officer (CEO) and then contact a financial officer to request an urgent wire transfer or gift card purchase. 
  • Criminals may also masquerade as trusted suppliers (using false invoices) or employees (diverting direct deposits).  
  • Deepfakes could make phone calls used in business email compromise schemes sound more authentic. 
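
The defense against these payment-fraud scenarios is procedural: confirmation must go out-of-band, to contact details already on file, never to a number or address supplied inside the (possibly deepfaked) request. A minimal sketch of that rule, with an entirely hypothetical directory and threshold:

```python
# Hedged illustration of out-of-band payment verification. The directory,
# threshold, and function names are assumptions for this sketch, not a
# real system's API.

ON_FILE = {"ceo@example.com": "+1-555-0100"}  # hypothetical contact directory


def needs_verification(amount: float, urgent: bool, threshold: float = 10_000.0) -> bool:
    """Decide whether a payment request requires out-of-band confirmation.
    Urgency is itself a fraud signal, so it triggers verification rather
    than bypassing it."""
    return urgent or amount >= threshold


def callback_number(requester: str) -> str:
    """Return the confirmation number from records on file, never from the
    request itself (an impersonator will happily supply their own)."""
    if requester not in ON_FILE:
        raise ValueError("unknown requester: escalate, do not pay")
    return ON_FILE[requester]


print(needs_verification(5_000, urgent=True))   # urgent request -> True
print(callback_number("ceo@example.com"))       # +1-555-0100
```

The design point is that the voice or email channel that delivered the request is never trusted to verify itself.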

How to detect a deepfake video

Poor Production:

  • Jerky movement.
  • Shifts in lighting from one frame to the next.
  • Pay attention to the glasses. Is there any glare? Is there too much glare? Does the angle of the glare change when the person moves? Deepfakes often fail to fully represent the natural physics of lighting.

Facial Features: Facial features are very difficult to perfect, especially the eyes. If the eyes look unnatural or the movement of facial features seems off, chances are good that it is an altered image.

  • Pay attention to the face. High-end deepfake manipulations are almost always facial transformations.
  • Pay attention to the cheeks and forehead. Does the skin appear too smooth or too wrinkly? Does the apparent age of the skin match that of the hair and eyes? Are there shifts in skin tone? Deepfakes are often incongruent on some of these dimensions.
  • Pay attention to the eyes and eyebrows. Do shadows appear in the places you would expect? Deepfakes often fail to fully represent the natural physics of a scene.
  • Pay attention to the facial hair, or lack thereof. Does the facial hair look real? Deepfakes might add or remove a mustache, sideburns, or a beard, but they often fail to make these transformations look fully natural.
  • Pay attention to facial moles. Does the mole look real?
  • Pay attention to blinking. Does the person blink enough, too much, or not at all?
  • Pay attention to the size and color of the lips. Do they match the rest of the person's face?
  • Pay attention to lip-sync. Does the mouth movement match the subject's speech?
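
The blinking cue above can even be checked semi-automatically. A common approach in the research literature is the eye aspect ratio (EAR): the eye "closes" when the ratio of vertical to horizontal eye-landmark distances drops below a threshold. A minimal sketch, assuming facial landmarks have already been extracted upstream (e.g. by a face-landmark library); the landmark coordinates and threshold here are illustrative sample data:

```python
# Illustrative sketch, not a production deepfake detector. Landmark
# extraction is assumed to happen elsewhere; this only shows the EAR
# arithmetic and blink counting.

from math import dist  # Euclidean distance between two points


def eye_aspect_ratio(eye):
    """eye: six (x, y) landmarks ordered around the eye, p1..p6.
    EAR = (|p2-p6| + |p3-p5|) / (2 * |p1-p4|)."""
    p1, p2, p3, p4, p5, p6 = eye
    return (dist(p2, p6) + dist(p3, p5)) / (2.0 * dist(p1, p4))


def count_blinks(ear_per_frame, threshold=0.21):
    """Count closed->open transitions in a sequence of per-frame EARs."""
    blinks, closed = 0, False
    for ear in ear_per_frame:
        if ear < threshold:
            closed = True
        elif closed:
            blinks += 1
            closed = False
    return blinks


# Hypothetical per-frame EAR trace: two dips below the threshold = 2 blinks.
# A real on-camera subject blinks regularly; a flat trace over a long clip
# is a red flag.
trace = [0.30, 0.31, 0.12, 0.10, 0.29, 0.30, 0.30, 0.11, 0.28, 0.30]
print(count_blinks(trace))  # 2
```

In practice the EAR threshold and minimum-closure duration would be tuned per camera and subject, and blinking is only one signal among the cues listed above.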

How to prevent deepfake fraud

  • Vigilance: Pay close attention to suspicious voice messages or calls that may sound like someone familiar yet feel slightly off. In an era of remote work, it is important to question interactions that could expose business vulnerabilities – could it be a phishing attempt or a complex social engineering scam?
  • Machine learning and advanced analytics: Fight fire with fire by deploying deepfake detection and analysis.
  • Layered fraud prevention strategy: Run multiple checks – identity verification, device ID and intelligence, behavioral analytics, and document verification – simultaneously to counter however fraudsters deploy or distribute deepfakes within the ecosystem.
  • Zero Trust: Another way to tell a deepfake from the real thing is to apply cybersecurity best practices and a zero-trust philosophy. Verify whatever you see. Double- and triple-check the source of the message. Do a reverse image search to find the original, if possible.
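
The layered strategy above can be sketched as simple signal fusion: no single check decides, so a deepfake that defeats one layer still has to beat the others. All names, weights, and thresholds below are illustrative assumptions, not any vendor's API:

```python
# Hedged sketch of combining independent fraud signals into one risk
# score. Weights and thresholds are placeholders that a real deployment
# would tune to its own risk appetite.

from dataclasses import dataclass


@dataclass
class Signals:
    identity_verified: bool   # e.g. document + selfie check passed
    known_device: bool        # device ID previously seen for this user
    behavior_score: float     # 0.0 (robotic) .. 1.0 (human-typical)
    liveness_passed: bool     # active liveness / challenge-response


def risk_score(s: Signals) -> float:
    """Higher = riskier. Each failed layer adds to the score."""
    score = 0.0
    if not s.identity_verified:
        score += 0.35
    if not s.known_device:
        score += 0.20
    if not s.liveness_passed:
        score += 0.30
    score += 0.15 * (1.0 - s.behavior_score)
    return round(score, 2)


def action(score: float) -> str:
    if score >= 0.6:
        return "block"
    if score >= 0.3:
        return "step-up"  # e.g. out-of-band callback before proceeding
    return "allow"


s = Signals(identity_verified=True, known_device=False,
            behavior_score=0.4, liveness_passed=True)
print(risk_score(s), action(risk_score(s)))  # 0.29 allow
```

The point of the layering is redundancy: a convincing deepfake video may pass the camera check, but it cannot retroactively make an unknown device familiar or fake months of typical user behavior.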
