A new deepfake tool lets you edit what a speaker says in a video, as easily as using a text editor!

Deepfake technology (from "deep learning" and "fake video") is built on open-source frameworks such as TensorFlow and is trained on videos from YouTube. The system then tries to substitute the selected person's face into the source video as realistically as possible.
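To give a sense of what "trained on videos" means here, below is a minimal TensorFlow/Keras sketch of the classic face-swap deepfake setup: one shared encoder and a separate decoder per identity. The architecture, layer sizes, and names are illustrative assumptions, not details of the system described in this article.

```python
# Minimal face-swap autoencoder sketch (illustrative, not the original system).
# Assumes 64x64 RGB face crops; hyperparameters are placeholders.
import tensorflow as tf
from tensorflow.keras import layers, Model

def build_encoder():
    inp = layers.Input(shape=(64, 64, 3))
    x = layers.Conv2D(64, 5, strides=2, padding="same", activation="relu")(inp)
    x = layers.Conv2D(128, 5, strides=2, padding="same", activation="relu")(x)
    x = layers.Flatten()(x)
    z = layers.Dense(256, activation="relu")(x)  # shared face embedding
    return Model(inp, z, name="encoder")

def build_decoder(name):
    z = layers.Input(shape=(256,))
    x = layers.Dense(16 * 16 * 128, activation="relu")(z)
    x = layers.Reshape((16, 16, 128))(x)
    x = layers.Conv2DTranspose(64, 5, strides=2, padding="same", activation="relu")(x)
    out = layers.Conv2DTranspose(3, 5, strides=2, padding="same", activation="sigmoid")(x)
    return Model(z, out, name=name)

encoder = build_encoder()
decoder_a = build_decoder("decoder_person_a")  # reconstructs person A's faces
decoder_b = build_decoder("decoder_person_b")  # reconstructs person B's faces

# Each autoencoder is trained only to reconstruct its own person's face crops.
# At inference time, encoding person A and decoding with decoder_b substitutes
# B's face while keeping A's pose and expression.
face_in = layers.Input(shape=(64, 64, 3))
autoencoder_a = Model(face_in, decoder_a(encoder(face_in)))
autoencoder_a.compile(optimizer="adam", loss="mae")
```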

The algorithm developed by the researchers is called Neural Rendering. It takes about 40 minutes to study the facial expressions of the person speaking in the video and to match the shape of their face to each phoneme.
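As a rough illustration of that phoneme-matching step, the toy Python lookup below maps phonemes to mouth shapes (visemes). In the real system these shapes are learned from the footage itself; the table and names here are invented purely for demonstration.

```python
# Toy phoneme-to-viseme table (invented values for illustration only).
PHONEME_TO_VISEME = {
    "AA": "open_jaw",      # as in "father"
    "IY": "wide_spread",   # as in "see"
    "UW": "rounded",       # as in "blue"
    "M":  "closed_lips",   # as in "mom"
    "B":  "closed_lips",
    "F":  "lip_to_teeth",  # as in "fun"
    "V":  "lip_to_teeth",
}

def phonemes_to_visemes(phonemes):
    """Map a phoneme sequence to the mouth shapes a renderer would need."""
    return [PHONEME_TO_VISEME.get(p, "neutral") for p in phonemes]

# "move" as ARPAbet phonemes: M UW V
print(phonemes_to_visemes(["M", "UW", "V"]))
# ['closed_lips', 'rounded', 'lip_to_teeth']
```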

After that, the neural network builds a 3D model of the speaker's face and lets you edit what they are saying by changing the speaker's facial expressions.
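The editing step can be pictured as working on the parameters of that 3D model rather than on pixels. The sketch below assumes a simple blendshape-style parameterization and shows how expression frames for an inserted word could be interpolated before a neural renderer turns them back into video; all coefficient names and values are hypothetical.

```python
# Sketch of editing via a parametric face model (assumed blendshape-style).
# Coefficient order: [jaw_open, lips_closed, lips_rounded, lips_wide].
import numpy as np

VISEME_COEFFS = {
    "closed_lips": np.array([0.0, 1.0, 0.0, 0.0]),
    "rounded":     np.array([0.6, 0.0, 1.0, 0.0]),
    "neutral":     np.array([0.1, 0.0, 0.0, 0.1]),
}

def synth_frames(visemes, frames_per_viseme=5):
    """Interpolate expression coefficients so the mouth moves smoothly."""
    keys = [VISEME_COEFFS[v] for v in visemes]
    frames = []
    for a, b in zip(keys, keys[1:] + [keys[-1]]):
        for t in np.linspace(0.0, 1.0, frames_per_viseme, endpoint=False):
            frames.append((1 - t) * a + t * b)
    return np.stack(frames)

# New mouth motion for an inserted word, ready to drive the 3D face model,
# which a neural renderer would then turn into photorealistic frames.
edited = synth_frames(["neutral", "closed_lips", "rounded", "neutral"])
print(edited.shape)  # (20, 4)
```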

The researchers point out that, ideally, the development will reduce the cost of re-shooting failed takes. However, the scientists declined to publish the neural network's code: the engineers believe it could be used to insert words and phrases into videos that the speaker never actually said.

Previously, engineers from the University of California and the University of British Columbia developed a neural network that allows virtual characters to learn how people move from videos, for example from YouTube videos.