Recently, the DALL-E 2 neural network model was presented; it is capable of generating pictures from text descriptions.
Blogger Denis Shiryaev and the author of the Neural Networks and Blender Telegram channel have published a selection of such images. For each prompt, the model generated eight image variants, and the bloggers chose the best of them.
Here are the most striking examples:
So far, DALL-E 2 is available only to a limited number of testers. You can, however, apply for access on the project's website.
DALL-E 2 "learned" the relationship between images and the texts used to describe them. The model uses a process called diffusion, which starts with a pattern of random dots and gradually alters that pattern towards an image as specific aspects of it are recognized.
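The diffusion idea described above can be illustrated with a toy sketch. This is not DALL-E 2's actual code: a real diffusion model uses a trained neural denoiser, while here a hypothetical known target simply stands in for the "recognized" image, so the loop structure of step-by-step refinement from random dots is visible.

```python
import random

random.seed(0)

# Stand-in for the image the model is steering toward (hypothetical;
# in a real diffusion model this is never known directly).
target = [random.random() for _ in range(64)]

# The starting point: a pattern of random dots (pure noise).
x = [random.gauss(0.0, 1.0) for _ in range(64)]

for step in range(50):
    # A real model predicts a denoising direction from learned
    # image-text relationships; this toy just moves a fraction
    # of the way toward the known target at each step.
    x = [xi + 0.1 * (ti - xi) for xi, ti in zip(x, target)]

error = sum(abs(xi - ti) for xi, ti in zip(x, target)) / len(x)
print(f"mean error after 50 steps: {error:.4f}")
```

After 50 such refinement steps the noise pattern has converged close to the target, mirroring how diffusion gradually turns random dots into a coherent picture.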