This research explores the effectiveness of behavioral analysis techniques in identifying synthetically generated media, such as deepfakes. The production of such media is increasingly facilitated by automated methods, particularly generative artificial intelligence (GAI).
The rapid advancement of these technologies has enabled the creation of increasingly realistic fake content, raising concerns about the integrity of communication, the assessment of credibility, and the erosion of public trust. While traditional detection methods rely primarily on forensic or computational analyses, this study introduces a novel interdisciplinary approach that integrates behavioral analysis, computer science, and digital forensics.
The research investigates indicators such as inconsistencies in movement and emotional expression, along with other potential behavioral anomalies in deepfake videos.
By analyzing real, synthetic, and hybrid data, it seeks to pinpoint behavioral and contextual cues that current AI models struggle to replicate accurately. Furthermore, it examines the limitations of existing detection techniques and advocates for a multi-channel approach that integrates behavioral, contextual, and forensic signals.
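As a purely illustrative sketch of what a multi-channel approach might look like in practice, the snippet below fuses per-channel suspicion scores into a single estimate. The channel names, weights, and score semantics are assumptions for illustration, not part of this study's method; each channel is assumed to emit a score in [0, 1], where higher means "more likely synthetic".

```python
# Hypothetical score-level fusion for a multi-channel deepfake detector.
# Channel names and weights below are illustrative assumptions only.

def fuse_channels(scores: dict, weights: dict = None) -> float:
    """Combine per-channel suspicion scores into a weighted average."""
    if weights is None:
        # Default to equal weighting across whatever channels are present.
        weights = {name: 1.0 for name in scores}
    total = sum(weights[name] for name in scores)
    return sum(scores[name] * weights[name] for name in scores) / total

# Example with three assumed channels: facial-movement consistency,
# emotional congruence, and a classical forensic artifact score.
scores = {"movement": 0.8, "emotion": 0.6, "forensic": 0.3}
fused = fuse_channels(scores)
```

In a real system the weights would be learned or calibrated, and a decision threshold would be chosen to balance false positives against missed detections; the sketch only shows how heterogeneous cues could be combined at the score level.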
Utilizing a predominantly qualitative approach, this work aims to lay the groundwork for identifying AI-generated deception, thereby facilitating future research and advancements in AI analysis and modeling. Expected contributions include a deeper understanding of the behavioral anomalies of synthetic content, improved reliability and efficiency of detection methods, and recommendations for building robust tools to counter the adverse effects of deepfake technology.