Deepfakes on Trial: Developing a High-Accuracy, Court-Admissible AI Pipeline for Deepfake Detection in Corporate Fraud Litigation
Academic Level at Time of Presentation
Senior
Major
CNM/Cybersecurity & Digital Forensics
List all Project Mentors & Advisor(s)
Randall Joyce, PhD
Presentation Format
Oral Presentation
Abstract/Description
As deepfake technology advances, cybercriminals are increasingly using AI-generated video and audio to impersonate executives and carry out sophisticated CEO fraud schemes. These synthetic forgeries exploit human trust and corporate communication channels, creating an urgent need for forensic tools that can authenticate digital evidence to a legally defensible standard. This thesis presents a forensic-grade AI deepfake detection pipeline designed for that purpose, emphasizing courtroom admissibility, reproducibility, and evidentiary integrity. Built entirely with free, open-source tools, the framework combines metadata analysis, AI-powered spectrogram analysis, neural artifact detection, and facial manipulation recognition into a transparent workflow that accurately identifies synthetic media. The models were trained on Meta's Deepfake Detection Challenge (DFDC) dataset, whose professionally produced videos closely resemble typical executive communications in lighting, composition, and audio quality, making it a realistic testbed for detecting corporate deepfakes. Each stage produces verifiable outputs, including SHA-256 hash records, ExifTool metadata logs, and interpretable feature traces, preserving a complete digital chain of custody from evidence intake to final determination. Achieving 91.72% accuracy across both the audio and visual modalities, the pipeline shows that open, explainable AI can deliver the forensic reliability required in legal and corporate investigations. Overall, this work demonstrates how transparent, interpretable AI can counter adversarial generative models in high-stakes legal settings.
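For illustration, the sketch below shows one way the chain-of-custody stage described above could work: hashing the evidence file with SHA-256 and capturing an ExifTool metadata log before analysis begins. This is a minimal sketch, not the thesis's actual code; the function names, record layout, and sample filename are illustrative assumptions, while hashlib's SHA-256 and ExifTool's -json output mode are standard.

```python
# Minimal chain-of-custody sketch (illustrative assumption, not the thesis code).
# Assumes ExifTool is installed and on PATH; its -json flag emits one JSON
# object per input file.
import hashlib
import json
import subprocess
from datetime import datetime, timezone
from pathlib import Path

def sha256_file(path: Path) -> str:
    """Compute the SHA-256 digest of a file, reading in 1 MiB chunks."""
    digest = hashlib.sha256()
    with path.open("rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):
            digest.update(chunk)
    return digest.hexdigest()

def custody_record(evidence: Path) -> dict:
    """Hash the evidence and capture an ExifTool metadata log,
    producing a timestamped record for the chain of custody."""
    metadata = json.loads(
        subprocess.run(
            ["exiftool", "-json", str(evidence)],
            capture_output=True, check=True, text=True,
        ).stdout
    )[0]
    return {
        "file": evidence.name,
        "sha256": sha256_file(evidence),
        "acquired_utc": datetime.now(timezone.utc).isoformat(),
        "exiftool": metadata,
    }

if __name__ == "__main__":
    # "suspect_video.mp4" is a placeholder path for demonstration.
    record = custody_record(Path("suspect_video.mp4"))
    print(json.dumps(record, indent=2))
```

Re-hashing the file at each subsequent pipeline stage and comparing the digest against this intake record is one way such a workflow could demonstrate that the evidence was never altered between ingestion and final determination.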
Fall Scholars Week 2025
Honors College Senior Thesis Presentations