The United Arab Emirates University, as part of its activities in the 11th International Government Communication Forum 2022 at the Expo Center Sharjah, launched an innovation by a research team of faculty members at the College of Information Technology: a modern technical application called a "deepfake", an image or recording that has been manipulated, using mathematical algorithms, to misrepresent someone as saying something they never said. It is hard to distinguish a real image and voice from fakes that reproduce the same facial expressions and motions.
Dr. Abdelkader Nasreddine Belkacem, Associate Professor in the Department of Computer and Network Engineering at the College of Information Technology and the research team's supervisor, said that this technology can be misused by media outlets to fabricate facts or news and influence public opinion. It can also serve educational applications: by evoking the voices and facial expressions of historical figures, it can hold students' attention in class.
He added that in this age of technology, artificial intelligence has shown tremendous success in many technical applications. Deepfake technology is based on an artificial intelligence algorithm in which one character in a video clip is replaced by another with similar facial characteristics. The basic idea of a deepfake is to build a deep learning model based on Generative Adversarial Networks (GANs) and deep autoencoders.
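The shared-encoder, per-identity-decoder layout that underlies many autoencoder-based face-swap models can be illustrated with a toy linear autoencoder. The data, dimensions, and training loop below are illustrative stand-ins, not the team's actual model:

```python
import numpy as np

rng = np.random.default_rng(0)
dim, latent = 16, 4

# toy "face" datasets for identities A and B (random stand-ins for image vectors)
A = rng.normal(size=(32, dim))
B = rng.normal(size=(32, dim))

# one shared encoder, one decoder per identity (the classic face-swap layout)
W_enc = rng.normal(scale=0.1, size=(dim, latent))
W_dec_a = rng.normal(scale=0.1, size=(latent, dim))
W_dec_b = rng.normal(scale=0.1, size=(latent, dim))

def recon_loss(X, W_dec):
    """Mean squared reconstruction error through the shared encoder."""
    return float(np.mean((X @ W_enc @ W_dec - X) ** 2))

init_loss = recon_loss(A, W_dec_a)

lr = 0.01
for _ in range(500):
    for X, W_dec in ((A, W_dec_a), (B, W_dec_b)):
        Z = X @ W_enc                 # encode a batch of faces
        err = Z @ W_dec - X           # reconstruction error
        grad_dec = (Z.T @ err) / len(X)
        grad_enc = (X.T @ (err @ W_dec.T)) / len(X)
        W_dec -= lr * grad_dec        # in-place updates keep W_dec_a/W_dec_b trained
        W_enc -= lr * grad_enc

final_loss = recon_loss(A, W_dec_a)

# the "swap": encode a face of identity A, then decode it with B's decoder
swapped = (A[:1] @ W_enc) @ W_dec_b
```

Because both identities pass through the same encoder, the latent code captures shared facial structure, and decoding with the other identity's decoder produces the face swap. Real deepfake models replace these linear maps with deep convolutional networks, often trained adversarially with a GAN discriminator.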
He said that a person's facial motions and expressions can be used to generate realistic expressions and motions for another person, making convincing fakes easy to produce. Despite the availability of many deepfake algorithms, there is a lack of web-based deepfake applications that provide a real-time response to users. The project aims to develop a web-based app that creates deepfakes in near real time for UAE residents.
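A web front end for such a service can be as simple as an HTTP endpoint that accepts uploads and hands them to the model. The sketch below uses only the Python standard library and is an assumed minimal shape for such an app, not the project's actual implementation:

```python
import threading
from http.server import BaseHTTPRequestHandler, HTTPServer

class UploadHandler(BaseHTTPRequestHandler):
    """Hypothetical upload endpoint: accepts a POSTed photo or video."""

    def do_POST(self):
        length = int(self.headers.get("Content-Length", 0))
        payload = self.rfile.read(length)   # the uploaded media bytes
        # a real service would queue a deepfake job here and stream the
        # generated clip back to the user in near real time
        self.send_response(200)
        self.send_header("Content-Type", "text/plain")
        self.end_headers()
        self.wfile.write(b"received %d bytes" % len(payload))

    def log_message(self, *args):
        pass  # silence per-request logging in this sketch

def serve(port=8765):
    """Start the upload server on a background thread and return it."""
    server = HTTPServer(("127.0.0.1", port), UploadHandler)
    threading.Thread(target=server.serve_forever, daemon=True).start()
    return server
```

In practice the heavy model inference would run in a worker queue rather than inside the request handler, since "near real time" generation still takes seconds per clip on GPU hardware.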
The student Abdul Rahman Al Kaabi from the College of Information Technology said that the deepfake algorithm was developed in three steps. First, faces are extracted from the source image and the video clip. Second, a deep learning-based model such as a GAN is used to train an encoder that converts one face into another. Finally, the model pairs the source image with the video frames and animates the source character's image. Unlike other deepfake models, the model used in this project can capture complex motions in a self-supervised manner: it learns a representation of the motion in the video and reconstructs the deepfake from a single frame. Users can upload photos and videos to the deepfake app.
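The three steps above can be sketched end to end. Every function here is a hypothetical stand-in (a fixed crop instead of a face detector, untrained linear maps instead of the learned model); the sketch shows only how motion extracted from the driving video is transferred onto a single source image:

```python
import numpy as np

rng = np.random.default_rng(1)

# Step 1: extract face crops from the source image and driving video frames
def extract_face(frame, box=(4, 12, 4, 12)):
    """Stand-in for a real face detector: crop a fixed 8x8 region."""
    t, b, l, r = box
    return frame[t:b, l:r]

# Step 2: stand-ins for the learned encoder/decoder pair
def encode(face, W):
    return face.flatten() @ W           # face -> latent code

def decode(z, W):
    return (z @ W.T).reshape(8, 8)      # latent code -> face

# Step 3: pair the source image with each driving frame and animate it by
# transferring per-frame motion (here, the latent offset from the first
# driving frame) onto the source face's latent code
def animate(source_img, driving_frames, W):
    src_z = encode(extract_face(source_img), W)
    drv_z = [encode(extract_face(f), W) for f in driving_frames]
    ref = drv_z[0]                      # reference pose: first driving frame
    return [decode(src_z + (z - ref), W) for z in drv_z]

source = rng.normal(size=(16, 16))                      # user-uploaded "photo"
video = [rng.normal(size=(16, 16)) for _ in range(5)]   # "driving" clip
W = rng.normal(scale=0.1, size=(64, 16))                # untrained toy weights

frames_out = animate(source, video, W)
```

The self-supervised idea is that the motion representation (the latent offsets) is learned by reconstructing frames of ordinary videos, with no labels; at generation time the same offsets animate a character seen in only one frame.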