AI learns to translate speech into sign-language video

An artificial intelligence (AI) model has learned to create photorealistic videos of sign language interpreters. These avatars translate speech into sign language in real time and can improve deaf users' access to dozens of information sources.

Ben Sanders of the University of Surrey (UK) and colleagues used a neural network that converts spoken language into sign language. Their SignGAN system maps the signs onto a 3D model of the human skeleton. The team trained the AI on video of real sign language interpreters, teaching it to generate photorealistic video from these skeleton poses.
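The two-stage idea described above, converting text into a sequence of skeleton poses and then rendering each pose as an image, can be sketched roughly as follows. This is an illustrative toy, not the authors' code: the pose lexicon, joint counts, and the dot-drawing "renderer" (standing in for the GAN generator) are all hypothetical.

```python
# Illustrative sketch of a SignGAN-style pipeline:
# text -> skeleton pose sequence -> rendered video frames.
import numpy as np

# Stage 1 (hypothetical): map each word to a short sequence of
# 2D skeleton keypoints (here: 4 frames per word, 5 joints each).
POSE_LEXICON = {
    "hello": np.random.default_rng(0).uniform(0.2, 0.8, (4, 5, 2)),
    "world": np.random.default_rng(1).uniform(0.2, 0.8, (4, 5, 2)),
}

def text_to_poses(text):
    """Concatenate per-word pose sequences into one skeleton track."""
    seqs = [POSE_LEXICON[w] for w in text.lower().split() if w in POSE_LEXICON]
    return np.concatenate(seqs, axis=0)  # shape: (frames, joints, 2)

# Stage 2 (stand-in for the GAN generator): render each pose by drawing
# its joints as bright pixels; a real generator outputs photorealistic frames.
def render_frames(poses, size=64):
    frames = np.zeros((len(poses), size, size), dtype=np.uint8)
    for t, pose in enumerate(poses):
        for x, y in pose:
            frames[t, int(y * (size - 1)), int(x * (size - 1))] = 255
    return frames

video = render_frames(text_to_poses("hello world"))
print(video.shape)  # (8, 64, 64): 8 frames of 64x64 "video"
```

The split into a pose stage and a rendering stage is the key design point: the skeleton acts as a compact intermediate representation, so the generator only has to learn pose-to-pixels, not language-to-pixels.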


Earlier, Google came up with a model that can read sign language during video calls. The AI can identify who is "actively speaking" in sign, while ignoring an interlocutor who is merely moving their hands or head.

The researchers presented a real-time sign language detection system. It can distinguish when an interlocutor is trying to say something from when they are simply moving their body, head, or arms. The researchers note that while this task may seem easy for a human, no video call service previously had such a system: they all respond to any sound or gesture a person makes.
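One simple way to approximate the distinction described above is to compare motion energy in the hand keypoints against overall body motion over a short window. This is a hedged, simplified heuristic for illustration only, not Google's model; the joint indices and thresholds are invented.

```python
# Toy heuristic: "actively signing" vs. incidental movement, judged
# from skeleton keypoint motion. All indices/thresholds are illustrative.
import numpy as np

HAND_JOINTS = [0, 1]     # hypothetical indices of the two wrist keypoints
BODY_JOINTS = [2, 3, 4]  # hypothetical indices of torso/head keypoints

def is_signing(poses, hand_thresh=0.05, ratio=2.0):
    """poses: (frames, joints, 2) normalized keypoints over a short window."""
    motion = np.abs(np.diff(poses, axis=0)).sum(axis=2)  # per-joint motion
    hand = motion[:, HAND_JOINTS].mean()
    body = motion[:, BODY_JOINTS].mean()
    # Signing: hands move noticeably, and much more than the rest of the body.
    return hand > hand_thresh and hand > ratio * max(body, 1e-6)

# Toy windows: signing = wrists oscillate while the body is still;
# fidgeting = the whole body drifts slightly and uniformly.
t = np.linspace(0, 2 * np.pi, 10)
signing = np.zeros((10, 5, 2))
signing[:, 0, 0] = 0.2 * np.sin(t)  # left wrist waves
signing[:, 1, 1] = 0.2 * np.cos(t)  # right wrist waves

fidget = np.cumsum(np.full((10, 5, 2), 0.01), axis=0)

print(is_signing(signing), is_signing(fidget))  # True False
```

A production system would replace this threshold rule with a learned classifier over pose and motion features, but the core signal, hand motion relative to body motion, is the same.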
