Deepfakes threaten privacy and security. Deep-learning detection methods aim to combat them, but there is a long way to go.
The phenomenon began in 2017, when Reddit users uploaded sexually explicit videos in which the faces of female celebrities were superimposed onto the bodies of adult film actresses. These disturbing videos were produced with a sophisticated deep learning technique that came to be known as the “deepfake”.
The technology sent shockwaves through the world, threatening privacy and societal security. Within a year, Reddit and other online platforms had banned deepfake pornography, but the problem does not stop there. The technique enables the replacement of one person's likeness with another's: popular examples include applying filters to alter facial attributes, swapping faces with celebrities, transferring facial expressions, and generating a new selfie from an original photo.
Malicious use of deepfakes can cause severe psychological harm and tarnish reputations, making the technology a powerful tool for inciting social panic and threatening world peace.
Developing robust deepfake detectors, which are trained to identify the distinct features that distinguish a fake image from a real one, is therefore crucial.
Traditional approaches analyse inconsistencies in pixel distribution, which manifest as unusual biometric artefacts and facial textures: unnatural skin texture, odd shadowing, and abnormal placement of facial attributes all serve as key indicators for deepfake detection.
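The pixel-inconsistency idea can be illustrated with a toy sketch. Face-swapping often leaves a pasted region whose texture is smoother than its surroundings, so the spread of high-frequency energy across local patches can hint at tampering. The function names and the grid-based scoring below are illustrative assumptions, not an actual detector:

```python
import numpy as np

def high_freq_energy(patch):
    # Approximate a Laplacian filter with finite differences:
    # sum of 4-neighbours minus 4x the centre pixel.
    lap = (patch[1:-1, :-2] + patch[1:-1, 2:] +
           patch[:-2, 1:-1] + patch[2:, 1:-1] -
           4 * patch[1:-1, 1:-1])
    return float(np.mean(lap ** 2))

def texture_consistency_score(img, grid=4):
    # Split the image into a grid of patches and measure how unevenly
    # high-frequency texture is distributed; blended (oversmoothed)
    # regions produce a large relative spread.
    h, w = img.shape
    energies = []
    for i in range(grid):
        for j in range(grid):
            patch = img[i * h // grid:(i + 1) * h // grid,
                        j * w // grid:(j + 1) * w // grid]
            energies.append(high_freq_energy(patch))
    energies = np.array(energies)
    return float(energies.std() / (energies.mean() + 1e-8))

# Synthetic demo: a uniformly textured "real" image versus a copy
# with an artificially oversmoothed (pasted-looking) central region.
rng = np.random.default_rng(0)
real = rng.normal(0.0, 1.0, (64, 64))
fake = real.copy()
fake[16:48, 16:48] = 0.0  # flatten texture in the "swapped" region

print(texture_consistency_score(real) < texture_consistency_score(fake))  # → True
```

Real detectors learn such cues with deep networks rather than hand-coded filters, but the intuition is the same: local statistics that disagree with the rest of the face are suspicious.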