Deepfake technology (a blend of "deep learning" and "fake videos") is based on an algorithm developed by the researchers, called Neural Rendering. It takes the algorithm about 40 minutes to study the facial expressions of the person speaking in a video and to match the shape of the speaker's face to each phonetic syllable.
After that, the neural network builds a 3D model of the speaker's face, which makes it possible to edit what the person is saying by changing the speaker's facial expressions.
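The researchers' code was not released, so the following is only a minimal, self-contained sketch of the idea described above: map each phoneme of an edited transcript to a mouth shape observed in the source footage, which would then drive the 3D face model. All names and data here (viseme_library, phonemes_for, mouth_shapes_for) are illustrative placeholders, not the actual system.

```python
# Hypothetical illustration of phoneme-to-mouth-shape matching; not the
# researchers' implementation, which was not published.

from typing import Dict, List

# Stand-in for a library of mouth shapes learned from ~40 minutes of footage,
# keyed by phoneme label. In the real system these would be 3D face-model
# parameters, not strings.
viseme_library: Dict[str, str] = {
    "HH": "open-relaxed",
    "EH": "open-wide",
    "L":  "tongue-up",
    "OW": "rounded",
}

def phonemes_for(word: str) -> List[str]:
    # Stand-in for a grapheme-to-phoneme step (e.g. "hello" -> HH EH L OW).
    lookup = {"hello": ["HH", "EH", "L", "OW"]}
    return lookup.get(word.lower(), [])

def mouth_shapes_for(text: str) -> List[str]:
    """Return the sequence of mouth shapes needed to render the edited text."""
    shapes: List[str] = []
    for word in text.split():
        for ph in phonemes_for(word):
            shapes.append(viseme_library.get(ph, "neutral"))
    return shapes

if __name__ == "__main__":
    # In the described pipeline, frames would then be synthesised from these
    # shapes via the 3D face model and neural rendering.
    print(mouth_shapes_for("hello"))  # ['open-relaxed', 'open-wide', 'tongue-up', 'rounded']
```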
The researchers point out that, ideally, the development could reduce the cost of re-shooting failed takes. However, they declined to publish the neural network's code: the engineers believe it could be used to insert words and phrases into videos that the speaker never actually said.
Previously, engineers from the University of California and the University of British Columbia developed a neural network that allows virtual characters to learn how people move from videos, for example from YouTube.