September 26, 2019

Are Deepfakes the Future of Identity Theft?

Committing identity theft or fraud may have just gotten easier. Are deepfakes the future of cybercrimes?

[Image: deepfake Obama example. Source: Wired.com]

Could someone be impersonating you?

With today’s technology, cybercriminals have a multitude of tricks up their sleeves to obtain your personal information. New programs, apps, and software can give them access to your life like never before.

But – what if they could go as far as to actually “become” you?

Deepfakes are a new tool that hackers are using to commit crimes. Educate yourself before you become the next target.

Learn about what deepfakes are and how deepfakes affect identity theft and fraud.

What Are Deepfakes?

A deepfake is a fabricated video or audio recording that is meant to look and sound real.

This type of misleading footage was once only used for movies or television shows. Now, in our increasingly digital world, anyone with a computer, editing skills, and the proper software can create their own deepfake.

“Deepfake” is a word that combines the terms “deep learning” and “fake.” Deep learning is a type of machine learning in which machines learn from large numbers of examples.

Deep learning is frequently used in today’s world. It is behind a lot of the technology we use every day, such as:

  • Voice assistants. Assistants like Siri and Alexa rely on deep learning; the longer you use them, the more they learn about how you speak.
  • Self-driving cars. Self-driving cars use deep learning to recognize stop signs, speed limits, and pedestrians.
  • Facial recognition. When a device or app can recognize your face and facial expressions, deep learning is at work.
  • Social media newsfeeds. Ever wonder why platforms like Facebook and Instagram serve you specific ads? These networks use deep learning to learn what you like to see.
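To make “learning through examples” a little more concrete, here is a toy sketch in PyTorch. The data and the tiny network are invented purely for illustration; the point is only that the model is never handed a rule – it sees labeled examples and gradually adjusts itself until its guesses match them.

```python
# A toy "learning from examples" loop using PyTorch.
# The data here is made up: pairs of numbers labeled 0 or 1.
import torch
import torch.nn as nn

# Made-up labeled examples: the model is never told the rule,
# it only sees inputs and the "right answers".
x = torch.tensor([[0.9, 0.1], [0.8, 0.2], [0.1, 0.9], [0.2, 0.8]])
y = torch.tensor([0, 0, 1, 1])

# A very small neural network: 2 inputs -> 8 hidden units -> 2 classes.
model = nn.Sequential(nn.Linear(2, 8), nn.ReLU(), nn.Linear(8, 2))
loss_fn = nn.CrossEntropyLoss()
optimizer = torch.optim.SGD(model.parameters(), lr=0.1)

# Each pass, the model guesses, measures how wrong it was,
# and nudges its internal numbers to be a little less wrong.
for step in range(200):
    optimizer.zero_grad()
    loss = loss_fn(model(x), y)
    loss.backward()
    optimizer.step()

# After training, the model has "learned" the pattern from the examples.
print(model(torch.tensor([[0.95, 0.05]])).argmax(dim=1))  # expect class 0
```

The same idea – show the system many examples and let it adjust itself – is what powers the voice assistants, self-driving cars, and newsfeeds above, just at a vastly larger scale.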

 


The term “deepfake” was coined in 2017 by a user on Reddit, a popular social media and forum platform.

Reddit users were also the first to release deepfake software that allows anyone to create these videos. Deepfakes began as a joke, but they have become increasingly dangerous because the videos can make anyone appear to be doing or saying anything.

How Do You Create a Deepfake Video?

With this software, anyone can create a deepfake video.

But – how exactly are they created?

First, creating a deepfake video requires a lot of research. You must compile an extensive collection of photos and videos of the person you want to impersonate. This means that people in the public eye, such as celebrities and politicians, are especially susceptible to deepfakes.

After collecting the photos and videos, you upload them to the deepfake software. The software scans the material and learns how the person’s face looks and moves, so it can match those patterns to whatever you want to “fake” them doing.

For example, if you want to make the person in the video say something they never said, the program has to learn what facial movements and mouth shapes the person makes when saying those words in order to fake them.
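As a rough illustration of that collection step, the sketch below pulls face crops out of a video using OpenCV’s built-in face detector. The file names and detector settings are assumptions made for the example; actual deepfake tools have their own pipelines, but the idea of harvesting many face images of the target is the same.

```python
# Rough sketch of the data-collection step: pull face crops out of a video
# so they can later be fed to training software. The file names are
# hypothetical; any video of the target person would work the same way.
import cv2
import os

os.makedirs("face_crops", exist_ok=True)

# OpenCV ships with a pre-trained frontal-face detector.
detector = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml"
)

video = cv2.VideoCapture("target_person_interview.mp4")  # hypothetical input
frame_index = 0
saved = 0

while True:
    ok, frame = video.read()
    if not ok:
        break
    # Face detection works on grayscale images.
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    faces = detector.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
    for (x, y, w, h) in faces:
        crop = frame[y:y + h, x:x + w]
        cv2.imwrite(f"face_crops/face_{frame_index}_{saved}.jpg", crop)
        saved += 1
    frame_index += 1

video.release()
print(f"Saved {saved} face crops from {frame_index} frames.")
```

The more varied the collected faces (different angles, lighting, and expressions), the more material the software has to learn from – which is why public figures with hours of footage online are the easiest targets.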

Making a Deepfake Video

To make deepfake videos appear even more realistic, a method called a “generative adversarial network,” or GAN, is used. In a GAN, two networks compete: one generates fake frames while the other tries to spot them, forcing the software to continuously improve the video until it is “statistically precise.”
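The article does not name the software behind any particular GAN, so here is a stripped-down sketch of the idea in PyTorch. Everything in it – the layer sizes, the stand-in “real” data – is invented for illustration; real deepfake tools apply the same two-network tug-of-war to images of faces rather than small vectors of numbers.

```python
# Bare-bones GAN sketch: a generator tries to produce "real-looking" data,
# a discriminator tries to tell real from fake, and each one's mistakes
# train the other. Real deepfake tools do this with face images; tiny
# 16-number vectors are used here only for brevity.
import torch
import torch.nn as nn

REAL_DIM, NOISE_DIM = 16, 8

generator = nn.Sequential(nn.Linear(NOISE_DIM, 32), nn.ReLU(), nn.Linear(32, REAL_DIM))
discriminator = nn.Sequential(nn.Linear(REAL_DIM, 32), nn.ReLU(), nn.Linear(32, 1))

loss_fn = nn.BCEWithLogitsLoss()
g_opt = torch.optim.Adam(generator.parameters(), lr=1e-3)
d_opt = torch.optim.Adam(discriminator.parameters(), lr=1e-3)

for step in range(1000):
    # Stand-in "real" data; a deepfake tool would use frames of the target person.
    real = torch.randn(64, REAL_DIM) * 0.5 + 2.0
    fake = generator(torch.randn(64, NOISE_DIM))

    # 1) Train the discriminator to label real data 1 and generated data 0.
    d_opt.zero_grad()
    d_loss = (loss_fn(discriminator(real), torch.ones(64, 1)) +
              loss_fn(discriminator(fake.detach()), torch.zeros(64, 1)))
    d_loss.backward()
    d_opt.step()

    # 2) Train the generator to fool the discriminator into saying "real".
    g_opt.zero_grad()
    g_loss = loss_fn(discriminator(fake), torch.ones(64, 1))
    g_loss.backward()
    g_opt.step()
```

Because the two networks keep pushing each other, the generator’s output gets harder and harder to distinguish from the real thing – which is exactly what makes GAN-based deepfakes so convincing.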

Currently, one of the most popular deepfake applications is an app called “FakeApp.” The app is built on open-source Google machine learning software and has over 120,000 active users.

To show you how easily these videos can be made and how real they appear, comedian Jordan Peele created his own deepfake video. This video fakes a presidential address from President Obama. Watch the full video below to see how shockingly accurate deepfake videos can be.

Deepfakes and Identity Theft and Fraud

Given how convincing deepfake videos have become, it is increasingly likely they will be used as a tactic for identity theft and fraud.

Most of the deepfake videos created so far involve celebrity impersonations. However, the wealth of photos and videos on social media makes it easier for hackers and cybercriminals to impersonate an everyday person, too.

In fact, Rep. Adam B. Schiff of the House Intelligence Committee says of deepfakes and their aftereffects: “I don’t think we’re well prepared at all. And I don’t think the public is aware of what’s coming.”

Deepfakes are especially dangerous in professional or educational environments. 

Due to the nature of deepfakes, those in high-level roles can be impersonated so that cybercriminals can gain access to sensitive information. 


For example, using the photos and information found on the internet and social media, hackers can trick employees into handing over information that lets them break into the business’s entire system. This can result in large-scale hacks and data breaches.

The same thing can happen on a much smaller scale. Imagine getting a video from a friend or loved one in which they appear to be in trouble and in need of money, or other assistance, to help them.

Are most of us ready and able to tell the difference between what is real and what is fake?

With the growing usage of deepfake videos, the question is: is this type of “impersonation” considered a form of identity theft or fraud?

Are There Laws against Deepfakes?

Depending on the situation the deepfake was used in, existing laws can apply. However, there is nothing in place that protects against deepfakes specifically.

Some examples of current laws that can protect you against deepfakes include:

Extortion

Extortion is when someone uses a threat of harm or violence to gain property or money. In the case of deepfakes, if someone demands money from you to stop or take down the deepfake, that is an act of extortion or cyber extortion. In some cases, the deepfake may also be used as blackmail to gain something, which is likewise extortion.

Harassment

Harassment is any action that demeans, threatens, or offends someone and creates a hostile environment for the victim. If a deepfake video threatens you or creates a hostile environment, it is considered harassment.

False Light

False light applies when someone is publicly portrayed in a misleading way that damages their reputation. The most common actions that fall into the “false light” category are photo manipulation, embellishment, and distortion, all of which a deepfake can involve.

Intentional Infliction of Emotional Distress

Intentional infliction of emotional distress occurs when someone deliberately acts in a way that causes the victim severe emotional distress. In other words, if a deepfake causes you severe emotional distress, this law can be used against its creator.

There are not many laws in place that address deepfake videos specifically, and new legislation is needed to protect American citizens against them.

 

There you have it! Now you know what to look out for and how to protect yourself against deepfakes.
