Mel Evans

Research indicates that deepfakes – the editing and/or manipulation of audio and visual media to misrepresent individuals or spread misinformation – have become increasingly prevalent in recent years (Nguyen et al., 2019; Verdoliva, 2020).

This study aimed to assess whether the human-based deception detection system proposed by Archer and Lansley (2015), the Six Channel Analysis System (SCAnS), can assist in the identification of digital deception in deepfakes. Deepfake videos from the Face Forensics++ research database were analysed using a modified SCAnS system (Face, Body, and Psychophysiology channels only) to identify incongruities that may indicate the presence of a deepfake video.

An additional channel of communication, ‘T’ (technical), was then created to notate additional Points of Interest (PIns) not covered by the original SCAnS system. Twenty videos (10 manipulated and 10 unmanipulated) were selected at random, downloaded from the Face Forensics++ DeepFake Research Library, and coded by two trained EIA coders using Kinovea software. Each coder’s instinctive judgement on the status of each video (manipulated or unmanipulated) was also independently recorded.

Results from both independent coders were then correlated and statistically analysed.

Findings were then discussed and critiqued, and recommendations were made that may facilitate subsequent improvements in the digital identification of deepfake videos.