New AI model improves robot vision without expensive sensors

Scientists from the AIRI Institute presented AI models for building “depth maps” at ISMAR 2022, an international conference on virtual and augmented reality held in Singapore. The technology improves computer vision without the use of expensive sensors.

Depth estimation is one of the main tasks in computer vision. For a robot to navigate in space, or for an augmented reality filter to be overlaid on the right object, the system must correctly estimate the distance to every object in the frame.
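In concrete terms, a depth map is an image-sized grid in which every pixel stores the distance from the camera to the surface it sees. A minimal illustrative sketch (the sizes and distances below are made up for illustration):

```python
import numpy as np

# A depth map is a per-pixel grid of distances (here in meters).
# Its shape matches the camera image: height x width.
height, width = 4, 6
depth_map = np.full((height, width), 10.0)  # background ~10 m away

# A nearby object occupies part of the frame at ~1.5 m.
depth_map[1:3, 2:5] = 1.5

# A robot (or an AR filter) can query the distance at any pixel.
print(depth_map[2, 3])  # -> 1.5
```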

As a rule, depth maps are built from information provided by special sensors. The most popular of these is lidar, a device that emits a beam of light and measures the time it takes for the reflection to return. The drawbacks of this technology are limited range and the high cost of the sensors. RGB cameras are used as an alternative; this approach is common in the development of AR applications for smartphones.
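The time-of-flight principle the article describes reduces to simple arithmetic: a light pulse travels to the object and back, so the distance is half the round-trip time multiplied by the speed of light. A back-of-the-envelope sketch (the pulse time is a made-up example):

```python
# Time-of-flight principle behind lidar:
# distance = speed_of_light * round_trip_time / 2
SPEED_OF_LIGHT = 299_792_458.0  # m/s

def distance_from_round_trip(round_trip_s: float) -> float:
    return SPEED_OF_LIGHT * round_trip_s / 2.0

# Example: a reflection returning after ~66.7 nanoseconds
# corresponds to an object roughly 10 meters away.
print(distance_from_round_trip(66.7e-9))  # ~= 10.0 m
```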

The new technology combines different approaches to the problem of depth estimation. The researchers developed models that use global spatial information to produce more accurate maps. The proposed model combines the advantages of transformers and convolutional neural networks. The authors note that the model is trained in a self-supervised manner and does not need data from depth sensors.
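The article does not disclose the architecture, so the sketch below only illustrates the general idea of combining a convolutional encoder (local features) with a transformer block (global spatial context) before decoding a depth map; all layer names and sizes are assumptions, written in PyTorch, not the AIRI model itself.

```python
import torch
import torch.nn as nn

class HybridDepthNet(nn.Module):
    """Illustrative CNN + transformer depth estimator (not the AIRI model)."""

    def __init__(self, channels: int = 64):
        super().__init__()
        # Convolutional encoder: extracts local features, downsamples 4x.
        self.encoder = nn.Sequential(
            nn.Conv2d(3, channels, kernel_size=3, stride=2, padding=1),
            nn.ReLU(inplace=True),
            nn.Conv2d(channels, channels, kernel_size=3, stride=2, padding=1),
            nn.ReLU(inplace=True),
        )
        # Transformer layer: every spatial position attends to all others,
        # providing the "global spatial information" mentioned in the article.
        self.transformer = nn.TransformerEncoderLayer(
            d_model=channels, nhead=4, batch_first=True
        )
        # Decoder: upsamples back to input resolution and predicts one
        # positive depth value per pixel.
        self.decoder = nn.Sequential(
            nn.ConvTranspose2d(channels, channels, kernel_size=4, stride=2, padding=1),
            nn.ReLU(inplace=True),
            nn.ConvTranspose2d(channels, 1, kernel_size=4, stride=2, padding=1),
            nn.Softplus(),
        )

    def forward(self, image: torch.Tensor) -> torch.Tensor:
        feats = self.encoder(image)                # (B, C, H/4, W/4)
        b, c, h, w = feats.shape
        tokens = feats.flatten(2).transpose(1, 2)  # (B, H*W/16, C)
        tokens = self.transformer(tokens)          # global attention
        feats = tokens.transpose(1, 2).reshape(b, c, h, w)
        return self.decoder(feats)                 # (B, 1, H, W)

# Example: one 3-channel 128x160 frame in, one depth map out.
depth = HybridDepthNet()(torch.rand(1, 3, 128, 160))
print(depth.shape)  # torch.Size([1, 1, 128, 160])
```

Training such a model without lidar ground truth typically relies on photometric consistency between neighboring video frames, which appears to be the kind of self-supervision the article refers to.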

The developers report that the proposed method was evaluated on independent datasets and showed some of the best results in the world. They promise to make the models and methods publicly available soon.
