A new deepfake lets you edit what a speaker says in a video. As easy as using a text editor!

The deepfake technology is built on open-source frameworks such as TensorFlow and is trained on YouTube videos. The system then tries to substitute the selected person's face into the original video as realistically as possible.
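
To make that concrete, here is a minimal sketch in Python (TensorFlow plus OpenCV) of what a frame-by-frame face substitution loop could look like. The model file name, the 256x256 input size, and the omission of face detection and blending are simplifying assumptions for illustration; the researchers' actual pipeline has not been published.

```python
import cv2
import numpy as np
import tensorflow as tf

# Hypothetical pretrained generator; the real model was not released.
generator = tf.keras.models.load_model("face_generator.h5")

video = cv2.VideoCapture("original_clip.mp4")
synthesized_frames = []
while True:
    ok, frame = video.read()
    if not ok:
        break
    # Normalize the frame and let the generator re-render the face region.
    x = cv2.resize(frame, (256, 256)).astype(np.float32) / 255.0
    fake = generator.predict(x[np.newaxis, ...])[0]
    synthesized_frames.append((fake * 255).astype(np.uint8))
video.release()
```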

The algorithm developed by the researchers is called Neural Rendering. It takes about 40 minutes to study the facial expressions of the person speaking in the video and to match the shape of their face to each phonetic syllable.
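
Roughly speaking, "matching the shape of the face to each phonetic syllable" means mapping phonemes to mouth shapes (visemes). A toy illustration follows; the phoneme set and viseme labels are invented for the example, not taken from the researchers' data.

```python
# Toy phoneme-to-viseme lookup: which mouth shape each sound should produce.
PHONEME_TO_VISEME = {
    "AA": "open_jaw", "IY": "wide_smile", "UW": "rounded_lips",
    "M": "closed_lips", "B": "closed_lips", "P": "closed_lips",
    "F": "lip_to_teeth", "V": "lip_to_teeth", "S": "narrow_teeth",
}

def visemes_for(phonemes):
    """Map a phoneme sequence to the mouth shapes the renderer should hit."""
    return [PHONEME_TO_VISEME.get(p, "neutral") for p in phonemes]

print(visemes_for(["M", "AA", "F"]))  # ['closed_lips', 'open_jaw', 'lip_to_teeth']
```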

After that, the neural network builds a 3D model of the speaker's face and lets you edit what they say, adjusting the speaker's facial expressions to match.
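
The editing step can then be pictured as swapping a word in the transcript, looking up the pronunciation of the new word, and driving the 3D face model with the resulting mouth shapes. In the sketch below, LEXICON and render_face_model are hypothetical stand-ins for the system's pronunciation dictionary and renderer.

```python
# Assumed pronunciations; a real system would use a full pronunciation lexicon.
LEXICON = {"buy": ["B", "AA", "IY"], "sell": ["S", "EH", "L"]}

def edit_utterance(words, old, new):
    """Replace a word and return the phoneme sequence for the edited sentence."""
    edited = [new if w == old else w for w in words]
    return edited, [p for w in edited for p in LEXICON.get(w, [])]

words, phonemes = edit_utterance(["buy"], "buy", "sell")
print(words, phonemes)  # ['sell'] ['S', 'EH', 'L']
# The viseme sequence for these phonemes would then drive the 3D face model, e.g.:
# frames = render_face_model(visemes_for(phonemes))   # hypothetical renderer
```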

The researchers point out that, ideally, the development will reduce the cost of re-shooting failed takes. However, they declined to publish the neural network's code: the engineers believe it could be used to insert words and phrases into videos that the speaker never actually said.

Previously, engineers from the University of California and the University of British Columbia developed a neural network that allows virtual characters to learn how people move from videos, for example from YouTube.