A neural network has learned to create complex images from text descriptions

Researchers have developed a generative adversarial network that is closely related to Microsoft's earlier AttnGAN algorithm. The difference is that the new network is object-driven: when creating an image from text, it parses the description for the objects it mentions and composes the final image from objects drawn from a library.
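The object-driven step described above amounts to two stages: find which library objects the caption mentions, then assign each one a position in the scene layout. A minimal illustrative sketch in Python, where the object library and the even-spacing layout rule are hypothetical simplifications, not the actual algorithm:

```python
# Hypothetical sketch of an object-driven layout step; the library and
# layout rule are invented for illustration, not taken from the paper.

OBJECT_LIBRARY = {"dog", "frisbee", "tree", "person", "car"}  # assumed tiny library


def extract_objects(caption: str) -> list[str]:
    """Return library objects mentioned in the caption, in order of appearance."""
    words = [w.strip(".,").lower() for w in caption.split()]
    return [w for w in words if w in OBJECT_LIBRARY]


def layout(objects: list[str], canvas: int = 256) -> dict[str, tuple[int, int]]:
    """Assign each object an (x, y) anchor by spacing objects evenly across the canvas."""
    n = len(objects)
    return {obj: (canvas * (i + 1) // (n + 1), canvas // 2)
            for i, obj in enumerate(objects)}


anchors = layout(extract_objects("A dog catches a frisbee under a tree"))
```

In the real system, a trained generator would then render and blend the objects at their layout positions; here the layout stage alone shows how text is turned into an arrangement of known objects.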

The algorithm was trained on 328,000 objects with text descriptions collected in the COCO dataset.

According to the study, the presented neural network performs better at generating images from text descriptions of complex scenes that contain many small details.

Earlier, the Facebook AI laboratory introduced an artificial intelligence system that can generate a recipe for a dish from a photo of the food.