Apple's Neural Network Needs Just a 10-Second Video to Make a Realistic Deepfake

Apple has developed NeuMan, a neural network that learns from short videos and can generate deepfake video.

To train the neural network, a 10-second video filmed with a moving camera is sufficient. From that footage, the program extracts the person and their surroundings.

After that, NeuMan can synthesize clips in which the same person performs different actions, such as dancing, somersaulting, or jumping. The new videos are less sharp, but overall they look like real, low-quality footage.

Demo video: dancing man. Video: Apple

According to the developers, the program's main purpose is augmented reality applications. They also note that the system uses two NeRF (neural radiance field) models: the first learns the person, and the second learns the background. With the help of these models, the neural network captures the rough geometry of the person and the scene, and can then recreate them in new poses.
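As a rough illustration of the two-model idea, the toy Python below sketches how a "person" field and a "background" field could be queried along a camera ray and composited with standard NeRF-style volume rendering. The functions human_field, scene_field and render_ray are illustrative stand-ins, not NeuMan's actual code or API: in the real system each field would be a trained network, while here they are hard-coded shapes to keep the example self-contained.

import numpy as np

def human_field(points):
    # Toy stand-in for the trained human NeRF: an opaque reddish sphere at the origin.
    dist = np.linalg.norm(points, axis=-1)
    density = np.where(dist < 0.5, 10.0, 0.0)
    rgb = np.tile([0.8, 0.3, 0.3], (len(points), 1))
    return density, rgb

def scene_field(points):
    # Toy stand-in for the trained background NeRF: a bluish wall behind the person.
    density = np.where(points[:, 2] > 2.0, 10.0, 0.0)
    rgb = np.tile([0.3, 0.3, 0.8], (len(points), 1))
    return density, rgb

def render_ray(origin, direction, n_samples=128, near=0.0, far=6.0):
    # Query both fields at sample points along the ray and composite them.
    t = np.linspace(near, far, n_samples)
    pts = origin + t[:, None] * direction
    d_h, c_h = human_field(pts)
    d_s, c_s = scene_field(pts)

    # Densities add; colors are blended in proportion to each field's density.
    density = d_h + d_s
    eps = 1e-8
    rgb = (d_h[:, None] * c_h + d_s[:, None] * c_s) / (density[:, None] + eps)

    # Standard NeRF-style alpha compositing along the ray.
    delta = np.diff(t, append=t[-1] + (t[-1] - t[-2]))
    alpha = 1.0 - np.exp(-density * delta)
    transmittance = np.cumprod(np.concatenate([[1.0], 1.0 - alpha[:-1] + eps]))
    weights = alpha * transmittance
    return (weights[:, None] * rgb).sum(axis=0)

# One ray aimed at the "person" standing in front of the "wall":
# the person's field occludes the background, so the result is mostly red.
print(render_ray(np.array([0.0, 0.0, -2.0]), np.array([0.0, 0.0, 1.0])))

Because the two fields are queried separately, the person can be re-posed or moved while the background stays fixed, which is what lets NeuMan place the same character into new motions.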

Demo video: exercise routine. Video: Apple

At the same time, the researchers note that the pose the person holds during filming does not affect the quality of the finished video.
