Deepfake Detection System: AI-powered Authenticity Verifier




By Next SolutionLab on 2024-10-22 23:08:31

 

Project Overview

 

In recent years, deep learning has seen tremendous advancements across various domains such as robotics, computer vision, and healthcare. However, with the rise of publicly accessible datasets, this technology has also been used for unethical purposes, including the creation of deepfakes — videos where facial manipulation is used to create highly realistic yet fake content. These deepfakes pose a severe threat by spreading misinformation, facilitating cyberbullying, and disrupting social peace and security. 

In response, this project addresses the growing threat of deepfakes by developing a detection system using a siamese network architecture combined with an ensemble-based metric learning approach to identify manipulated videos. Multiple models, starting from a base network, are used to detect facial manipulation, and the system has been rigorously tested on datasets like DFDC, FaceForensics++, and Celeb-DF (v2), showing strong results in both self- and cross-testing scenarios. A Streamlit-powered web interface allows users to upload videos, detect deepfakes, and view annotated results, offering an intuitive tool to ensure the integrity of visual media.

 

Getting Started

Now, let's take a closer look at the deep learning technology behind deepfakes.

 

What is a Deepfake?

A deepfake is an AI-driven method used to fabricate or alter audio, video, or other digital media to make it seem as though it was created by a different person or entity. This process commonly utilizes deep learning models, especially Generative Adversarial Networks (GANs), to produce convincing and highly realistic fake content.

 
Left: Real footage of Barack Obama. Right: Simulated footage using new Deep Video Portraits technology.

How Are Deepfakes Created?

Most of the time, deepfakes are generated using an artificial intelligence technique known as Generative Adversarial Networks (GANs). GANs involve two neural networks, the generator and the discriminator, that are trained together in a competitive process to create and detect fake content. The following figure demonstrates a GAN architecture:

 
GAN architecture to generate deepfakes.
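To make the adversarial idea concrete, here is a minimal PyTorch sketch of the generator-versus-discriminator training loop. The tiny fully connected networks, latent size, and random "real" batch are placeholders chosen for illustration; actual deepfake generators operate on face images with far larger convolutional models.

# Minimal sketch of adversarial (GAN) training in PyTorch.
# The tiny MLP generator/discriminator and the random "real" batch
# are illustrative placeholders; real deepfake generators work on face images.
import torch
import torch.nn as nn

latent_dim, data_dim = 16, 64

generator = nn.Sequential(nn.Linear(latent_dim, 128), nn.ReLU(), nn.Linear(128, data_dim))
discriminator = nn.Sequential(nn.Linear(data_dim, 128), nn.LeakyReLU(0.2), nn.Linear(128, 1))

opt_g = torch.optim.Adam(generator.parameters(), lr=2e-4)
opt_d = torch.optim.Adam(discriminator.parameters(), lr=2e-4)
bce = nn.BCEWithLogitsLoss()

for step in range(100):
    real = torch.randn(32, data_dim)   # stand-in for a batch of real samples
    fake = generator(torch.randn(32, latent_dim))

    # Discriminator step: label real samples 1 and generated samples 0.
    d_loss = bce(discriminator(real), torch.ones(32, 1)) + \
             bce(discriminator(fake.detach()), torch.zeros(32, 1))
    opt_d.zero_grad(); d_loss.backward(); opt_d.step()

    # Generator step: try to make the discriminator output 1 for fakes.
    g_loss = bce(discriminator(fake), torch.ones(32, 1))
    opt_g.zero_grad(); g_loss.backward(); opt_g.step()

The generator improves only by fooling the discriminator, which is exactly the dynamic that makes the resulting fakes hard to spot.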

What are the Problems with Deepfakes?

Key Problems Associated with Deepfakes

  1. Misinformation and Disinformation: Deepfakes can spread false information, undermining public trust in media and contributing to the dissemination of fake news.
  2. Identity Theft and Impersonation: Malicious actors can impersonate individuals, leading to privacy violations, reputational damage, and potential fraud.
  3. Political Manipulation: Deepfakes can create deceptive videos of politicians or public figures, influencing elections and shaping public opinion through false narratives.
  4. Cyberbullying and Harassment: Deepfakes can be used to create harmful content aimed at bullying or defaming individuals, causing emotional distress.
  5. Erosion of Trust: The rise of deepfakes can lead to skepticism about all video content, making it challenging for individuals to discern what is real and what is fake.

 

How Can Deepfakes Be Identified?

Methods for Detecting Deepfakes

Identifying deepfakes can be challenging, but several techniques can aid in detection:

  1. Artifact Analysis: Look for inconsistencies, such as unnatural blinking or distortions in the background (a toy sketch of this idea follows the list).
  2. Facial and Lip-Syncing Errors: Check for discrepancies between audio and visual elements, particularly in lip movements.
  3. Inconsistent Lighting and Shadows: Observe lighting inconsistencies that are not typical of natural footage.
  4. Unnatural Eye Movements: Watch for unusual eye movements that differ from real human behavior.
  5. Blur or Glitches: Look for blurriness or glitches around facial edges or detailed areas.
  6. Detection Tools: Utilize specialized software that employs machine learning algorithms for analysis.
  7. Source Verification: Confirm the authenticity of the original content source to identify potential deepfakes.

As deepfake technology advances, detection methods also improve, with ongoing research continually enhancing identification techniques.
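As a toy illustration of artifact analysis (method 1 above), the sketch below uses OpenCV's Laplacian variance as a crude per-frame sharpness score and flags frames that are much blurrier than the video's median. The 0.5x-median threshold and the sample path are arbitrary choices for demonstration, not part of this project's detection logic.

# Toy artifact check: flag frames whose sharpness (Laplacian variance)
# falls far below the video's median, a crude proxy for blending blur.
# The 0.5x-median threshold is an arbitrary illustrative choice.
import cv2
import numpy as np

def frame_sharpness(path, max_frames=300):
    cap = cv2.VideoCapture(path)
    scores = []
    while len(scores) < max_frames:
        ok, frame = cap.read()
        if not ok:
            break
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        scores.append(cv2.Laplacian(gray, cv2.CV_64F).var())
    cap.release()
    return np.array(scores)

scores = frame_sharpness("sample_videos/abc.mp4")
if scores.size:
    suspicious = np.where(scores < 0.5 * np.median(scores))[0]
    print(f"{len(suspicious)} unusually blurry frames out of {len(scores)}")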

 

Our Implemented Solution

We propose a deep learning-based architecture designed to effectively address the challenges associated with deepfake detection. This architecture leverages advanced techniques to improve the identification of manipulated content, ensuring robust performance across various datasets and scenarios. Our implemented method is as follows:

 

 
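As a highly simplified, hypothetical illustration of the siamese, metric-learning idea behind our architecture, the PyTorch sketch below embeds two face crops with a shared backbone and applies a contrastive loss that pulls same-label pairs together and pushes real-versus-fake pairs apart. The placeholder backbone, embedding size, and margin are illustrative assumptions, not the exact configuration of our trained models.

# Simplified siamese metric-learning sketch in PyTorch.
# A shared backbone embeds two face crops; a contrastive loss pulls
# same-label pairs together and pushes real-vs-fake pairs apart.
# The backbone, embedding size, and margin are illustrative choices.
import torch
import torch.nn as nn
import torch.nn.functional as F

class SiameseNet(nn.Module):
    def __init__(self, embed_dim=128):
        super().__init__()
        self.backbone = nn.Sequential(      # placeholder CNN backbone
            nn.Conv2d(3, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(64, embed_dim),
        )

    def forward(self, x1, x2):
        return self.backbone(x1), self.backbone(x2)

def contrastive_loss(e1, e2, same_label, margin=1.0):
    dist = F.pairwise_distance(e1, e2)
    # Same-label pairs: minimise distance; mixed pairs: push beyond the margin.
    return (same_label * dist.pow(2) +
            (1 - same_label) * F.relu(margin - dist).pow(2)).mean()

model = SiameseNet()
x1, x2 = torch.randn(8, 3, 224, 224), torch.randn(8, 3, 224, 224)
same = torch.randint(0, 2, (8,)).float()
loss = contrastive_loss(*model(x1, x2), same)
loss.backward()

In the full system, several such models trained from a common base network are combined in an ensemble to make the final decision.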

Setup and Installation Guide

Set up the project on your local machine by following the instructions below. You can also run the demo at Next_Deepfake_Detection [Check Here].

Built With

Installation

  1. Create a Python virtual environment (Anaconda) and activate it:

     conda create -n deepfake
     conda activate deepfake

  2. Clone the repo:

     git clone https://github.com/NSL/

  3. Install dependencies:

     cd Deepfake_Detection_System
     pip install -r requirements.txt

 

Project File Structure

After completing the installation steps above, the directory structure of 'Deepfake_Detection_System' is as follows:


|-- assets # contains images & gifs for readme
|-- models # contains .pth files. 
|   |-- dfdc_v2.pth
|   |-- dfdc_v2st.pth 
|   |-- dfdc_vit.pth 
|   |-- dfdc_vitst.pth 
|-- README.md
|-- requirements.txt
|-- sample_images # contains sample images from test set of ffpp, celebdf & dfdc dataset.        
|-- sample_output_videos # contains sample output videos that are obtained after running the code 
|-- sample_videos # contains all the sample testing videos 
|-- src
    |-- architectures # contains definitions of models
    |-- blazeface # for face extraction
    |-- audio
    |-- uploads
    |-- output # contains the annotated video files generated by running spot_deepfakes.py
    |   |-- abc.avi # annotated video with frame-level predictions done by the ensemble of models for sample_videos/abc.mp4
    |   |-- pqr.avi # annotated video with frame-level predictions done by the ensemble of models for sample_videos/pqr.mp4
    |-- spot_deepfakes.py # main()
    |-- run.py # streamlit main function()
    |-- utils # contains functions for extraction of faces from videos in sample_videos, loading models, ensemble of models and annotation
    |-- single_video_check.py # contains the functions needed to check whether a single video is fake or real
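The utils module listed above handles model loading and ensembling. As a rough, hypothetical illustration of how an ensemble of checkpoints can be combined, the sketch below loads several .pth files and averages their per-face fake probabilities; the build_model() factory, the assumed checkpoint format, and the simple mean aggregation are placeholders for this example and may differ from the actual code in src/utils.

# Hypothetical sketch of loading several checkpoints and averaging their
# per-face fake probabilities. build_model() and the checkpoint format are
# assumptions; the real model classes live under src/architectures.
import torch

CHECKPOINTS = ["models/dfdc_v2.pth", "models/dfdc_v2st.pth",
               "models/dfdc_vit.pth", "models/dfdc_vitst.pth"]

def build_model():
    # Placeholder network standing in for the project's real architectures.
    return torch.nn.Sequential(torch.nn.Flatten(), torch.nn.Linear(3 * 224 * 224, 1))

def load_ensemble(paths=CHECKPOINTS):
    models = []
    for path in paths:
        model = build_model()
        state = torch.load(path, map_location="cpu")
        model.load_state_dict(state, strict=False)  # assumes each .pth stores a state_dict
        model.eval()
        models.append(model)
    return models

@torch.no_grad()
def ensemble_score(models, face_batch):
    # Average the sigmoid outputs of all models for each face crop.
    probs = [torch.sigmoid(model(face_batch)) for model in models]
    return torch.stack(probs).mean(dim=0)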

 

Usage

For videos

1. Run the following command:

cd src 

2. Now, activate the environment and run the following commands to launch the Streamlit app:

conda activate deepfake
streamlit run run.py

Once the Streamlit interface opens, upload a video and click the Check Video button; the annotated output will then be displayed.
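For reference, a minimal Streamlit flow along these lines might look like the sketch below. The run_detection() function is a hypothetical stand-in for the project's actual inference pipeline, not the real API of run.py.

# Minimal Streamlit sketch of the upload -> check -> display flow.
# run_detection() is a hypothetical placeholder for the project's actual
# inference pipeline; see src/run.py for the real interface.
import tempfile
import streamlit as st

def run_detection(video_path: str) -> str:
    # Placeholder: would call the model ensemble and return the annotated video path.
    return video_path

st.title("Deepfake Detection System")
uploaded = st.file_uploader("Upload a video", type=["mp4", "avi", "mov"])

if uploaded is not None and st.button("Check Video"):
    with tempfile.NamedTemporaryFile(delete=False, suffix=".mp4") as tmp:
        tmp.write(uploaded.read())
    with st.spinner("Analyzing frames..."):
        annotated_path = run_detection(tmp.name)
    st.video(annotated_path)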

 

Sample Output

1st Sample

The user interface is as follows:


User interface with an example.

For a given input video, the output video is displayed as follows:


Left: Given Fake Video. Right: Identified Fake Video.

 

2nd Sample

The user interface is as follows:


User interface with an example.

For a given input video, the output video is displayed as follows:



Left: Given Real Video. Right: Identified Real Video.

Based on the sample output, the system detects whether a given video contains deepfake data and provides frame-by-frame annotations, indicating which frames appear to be fake and which seem real.
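To show how such frame-by-frame annotations can be rendered, the sketch below overlays a REAL/FAKE label on every frame with OpenCV, given a list of per-frame fake probabilities. The probability list and the 0.5 threshold are assumed inputs for illustration; the project's own annotation routine lives in src/utils.

# Illustrative frame annotation with OpenCV: overlay a REAL/FAKE label on
# each frame, given a list of per-frame fake probabilities (assumed input).
import cv2

def annotate_video(in_path, out_path, frame_probs, threshold=0.5):
    cap = cv2.VideoCapture(in_path)
    fps = cap.get(cv2.CAP_PROP_FPS) or 25
    width = int(cap.get(cv2.CAP_PROP_FRAME_WIDTH))
    height = int(cap.get(cv2.CAP_PROP_FRAME_HEIGHT))
    writer = cv2.VideoWriter(out_path, cv2.VideoWriter_fourcc(*"XVID"), fps, (width, height))

    for prob in frame_probs:
        ok, frame = cap.read()
        if not ok:
            break
        label = "FAKE" if prob > threshold else "REAL"
        color = (0, 0, 255) if label == "FAKE" else (0, 255, 0)  # red for fake, green for real
        cv2.putText(frame, f"{label} ({prob:.2f})", (20, 40),
                    cv2.FONT_HERSHEY_SIMPLEX, 1.0, color, 2)
        writer.write(frame)

    cap.release()
    writer.release()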

 

Features

 

  • Video Upload: Users can upload a video for deepfake analysis.
  • Frame-wise Annotation: The system provides detailed results indicating which frames appear to be real or fake.
  • Streamlit Web Interface: A user-friendly web interface to interact with the model.
  • Model Architecture: Utilizes a siamese network and ensemble-based metric learning for accurate deepfake detection.

 

Applications

Deepfake detection has numerous applications across various industries:

  1. Social Media: Prevents the spread of misleading content.
  2. Media Verification: Ensures video authenticity in journalism.
  3. Cybersecurity: Protects against impersonation and identity theft.
  4. Legal Evidence: Verifies the authenticity of video evidence.
  5. Celebrity & Brand Protection: Guards against unauthorized use of likeness.
  6. Video Conferencing: Prevents fraud in virtual meetings.
  7. Financial Services: Secures video-based customer verification.
  8. Online Dating: Detects fake identities in social interactions.
  9. Education: Raises awareness of deepfake risks.
  10. National Security: Defends against disinformation campaigns.

 

References

 

Citations

@inproceedings{9862825,
  author={Nehate, Chinmay and Dalia, Parth and Naik, Saket and Bhan, Aditya},
  booktitle={2022 IEEE India Council International Subsections Conference (INDISCON)},
  title={Exposing DeepFakes using Siamese Training},
  year={2022},
  pages={1-6},
  doi={10.1109/INDISCON54605.2022.9862825}
}

@inproceedings{coccomini2022combining,
  title={Combining efficientnet and vision transformers for video deepfake detection},
  author={Coccomini, Davide Alessandro and Messina, Nicola and Gennaro, Claudio and Falchi, Fabrizio},
  booktitle={International conference on image analysis and processing},
  pages={219--229},
  year={2022},
  organization={Springer}
}

Let us know your interest

At Next Solution Lab, we are dedicated to transforming experiences through innovative solutions. If you are interested in learning more about how our projects can benefit your organization, we would love to hear from you.

Contact Us