By Next SolutionLab on 2024-10-22 23:08:31
In recent years, deep learning has seen tremendous advancements across various domains such as robotics, computer vision, and healthcare. However, with the rise of publicly accessible datasets, this technology has also been used for unethical purposes, including the creation of deepfakes — videos where facial manipulation is used to create highly realistic yet fake content. These deepfakes pose a severe threat by spreading misinformation, facilitating cyberbullying, and disrupting social peace and security.
In response, this project addresses the growing threat of deepfakes by developing a detection system using a Siamese network architecture combined with an ensemble-based metric learning approach to identify manipulated videos. Multiple models, starting from a base network, are used to detect facial manipulation, and the system has been rigorously tested on datasets like DFDC, FaceForensics++, and Celeb-DF (v2), showing strong results in both self- and cross-testing scenarios. A Streamlit-powered web interface allows users to upload videos, detect deepfakes, and view annotated results, offering an intuitive tool to help ensure the integrity of visual media.
Now, let's take a closer look at the technology behind deepfakes.
A deepfake is an AI-driven method used to fabricate or alter audio, video, or other digital media to make it seem as though it was created by a different person or entity. This process commonly utilizes deep learning models, especially Generative Adversarial Networks (GANs), to produce convincing and highly realistic fake content.
GANs pit two neural networks, a generator and a discriminator, against each other: the generator learns to produce fake content while the discriminator learns to tell it apart from real data, and this competition drives both to improve. The following figure demonstrates a GAN architecture:
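To complement the figure, here is a minimal, illustrative PyTorch sketch of the two competing networks and one adversarial loss computation; the class names, layer sizes, and latent dimension are our own assumptions for illustration, not code from any particular deepfake tool.

# Minimal GAN sketch (illustrative only; layer sizes and dimensions are arbitrary choices)
import torch
import torch.nn as nn

latent_dim = 100          # size of the random noise vector fed to the generator
image_dim = 64 * 64 * 3   # a flattened 64x64 RGB image

class Generator(nn.Module):
    """Maps random noise to a fake image."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(latent_dim, 512), nn.ReLU(),
            nn.Linear(512, image_dim), nn.Tanh(),  # pixel values in [-1, 1]
        )

    def forward(self, z):
        return self.net(z)

class Discriminator(nn.Module):
    """Scores how likely an input image is real (1) rather than generated (0)."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(image_dim, 512), nn.LeakyReLU(0.2),
            nn.Linear(512, 1), nn.Sigmoid(),
        )

    def forward(self, x):
        return self.net(x)

# One adversarial step: the discriminator learns to separate real from fake,
# while the generator learns to produce images the discriminator accepts as real.
G, D = Generator(), Discriminator()
z = torch.randn(8, latent_dim)                     # a batch of 8 noise vectors
fake_images = G(z)
real_images = torch.rand(8, image_dim) * 2 - 1     # placeholder "real" images in [-1, 1]
bce = nn.BCELoss()
d_loss = bce(D(real_images), torch.ones(8, 1)) + bce(D(fake_images.detach()), torch.zeros(8, 1))
g_loss = bce(D(fake_images), torch.ones(8, 1))     # the generator wants D to say "real"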
Identifying deepfakes can be challenging, but several techniques can aid in detection:
We propose a deep learning-based architecture designed to address the key challenges of deepfake detection. It combines a Siamese network with ensemble-based metric learning to improve the identification of manipulated content and to maintain robust performance across datasets and testing scenarios. Our implemented method is as follows:
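As a rough illustration of the Siamese, metric-learning idea, the sketch below shows a shared embedding branch, a contrastive loss, and a simple ensemble score. The ResNet-18 backbone, the 128-dimensional embedding, and the helper names are assumptions made for exposition; they are not the project's actual implementation.

# Illustrative sketch of a siamese, metric-learning style detector (not the project's code).
import torch
import torch.nn as nn
import torchvision.models as models

class SiameseBranch(nn.Module):
    """Shared backbone that maps a face crop to a normalized embedding vector."""
    def __init__(self, embedding_dim=128):
        super().__init__()
        backbone = models.resnet18(weights=None)   # any CNN or ViT backbone could stand in here
        backbone.fc = nn.Linear(backbone.fc.in_features, embedding_dim)
        self.backbone = backbone

    def forward(self, x):
        return nn.functional.normalize(self.backbone(x), dim=1)

def contrastive_loss(emb_a, emb_b, same_label, margin=1.0):
    """Pull same-class pairs together, push different-class pairs at least `margin` apart.
    `same_label` is a float tensor of 1s (same class) and 0s (different class)."""
    dist = nn.functional.pairwise_distance(emb_a, emb_b)
    return (same_label * dist.pow(2) +
            (1 - same_label) * torch.clamp(margin - dist, min=0).pow(2)).mean()

def ensemble_score(branches, reference_real, face_crop):
    """Average the embedding distance to a known-real reference across several trained branches;
    a larger average distance suggests the face crop has been manipulated."""
    scores = [nn.functional.pairwise_distance(b(face_crop), b(reference_real)) for b in branches]
    return torch.stack(scores).mean(dim=0)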
Set up the project on your local machine by following the instructions below. You can also run the demo on Next_Deepfake_Detection
conda create -n deepfake
conda activate deepfake
git clone https://github.com/NSL/
cd Deepfake_Detection_System
pip install -r requirements.txt
After running the prerequisite and installation steps above, the directory structure of 'Deepfake_Detection_System' is as follows:
|-- assets # contains images & gifs for readme
|-- models # contains .pth files.
| |-- dfdc_v2.pth
| |-- dfdc_v2st.pth
| |-- dfdc_vit.pth
| |-- dfdc_vitst.pth
|-- README.md
|-- requirements.txt
|-- sample_images # contains sample images from test set of ffpp, celebdf & dfdc dataset.
|-- sample_output_videos # contains sample output videos that are obtained after running the code
|-- sample_videos # contains all the sample testing videos
|-- src
|-- architectures # contains definitions of models
|-- blazeface # for face extraction
|-- audio
|-- uploads
|-- output # contains the annotated video files generated by running spot_deepfakes.py
| |-- abc.avi # annotated video with frame-level predictions done by the ensemble of models for sample_videos/abc.mp4
| |-- pqr.avi # annotated video with frame-level predictions done by the ensemble of models for sample_videos/pqr.mp4
|-- spot_deepfakes.py # main()
|-- run.py # streamlit main function()
|-- utils # contains functions for extraction of faces from videos in sample_videos, loading models, ensemble of models and annotation
|-- single_video_check.py # contains the functions needed to check whether a single video is manipulated (see the sketch after this tree)
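To make the flow behind spot_deepfakes.py and single_video_check.py easier to picture, here is one hypothetical way of averaging frame-level ensemble predictions into a per-video verdict. The check_video function, the face_extractor callable, and the 0.5 threshold are illustrative assumptions, not the repository's actual API.

# Hypothetical per-video check: extract a face per frame, score it with every model
# in the ensemble, and average the frame-level probabilities into one verdict.
import cv2
import torch

def check_video(video_path, models, face_extractor, device="cpu", threshold=0.5):
    """Return (is_fake, mean_fake_probability) for a single video."""
    capture = cv2.VideoCapture(video_path)
    frame_scores = []
    while True:
        ok, frame = capture.read()
        if not ok:
            break
        face = face_extractor(frame)   # e.g. a BlazeFace-based crop; assumed to return an HxWx3 array or None
        if face is None:
            continue
        tensor = torch.from_numpy(face).permute(2, 0, 1).float().unsqueeze(0).to(device) / 255.0
        with torch.no_grad():
            # Each model is assumed to output a single "fake" logit for the face crop.
            probs = [torch.sigmoid(m(tensor)).item() for m in models]
        frame_scores.append(sum(probs) / len(probs))
    capture.release()
    mean_score = sum(frame_scores) / max(len(frame_scores), 1)
    return mean_score > threshold, mean_score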
1. Run the following commands:
cd src
2. Now, activate the environment and run the following commands:
conda activate deepfake
streamlit run run.py
After opening the Streamlit interface, upload a video. Then click on the Check Video button. Finally, the output will be shown.
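For reference, the upload-and-check interaction in run.py could look roughly like the sketch below; the widget labels and the detect_deepfake stub are placeholders for illustration rather than the actual run.py code.

# Illustrative Streamlit flow: upload a video, run detection on button click, show the output.
import tempfile
import streamlit as st

def detect_deepfake(video_path):
    """Placeholder for the project's ensemble pipeline; should return (is_fake, annotated_video_path)."""
    raise NotImplementedError("wire this up to the detection code in src/")

st.title("Deepfake Detection System")
uploaded = st.file_uploader("Upload a video", type=["mp4", "avi"])

if uploaded is not None and st.button("Check Video"):
    # Persist the upload to disk so OpenCV/ffmpeg-based code can read it.
    with tempfile.NamedTemporaryFile(delete=False, suffix=".mp4") as tmp:
        tmp.write(uploaded.read())
        video_path = tmp.name
    is_fake, annotated_path = detect_deepfake(video_path)
    st.write("Prediction:", "FAKE" if is_fake else "REAL")
    st.video(annotated_path)   # display the annotated output video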
1️⃣st Sample
For a given input video, the output video is displayed as follows:
Given Fake Video (left) vs. Identified Fake Video (right)
2️⃣nd Sample