The third wave of devices
XR technologies and the glasses now appearing in the world are the third wave of personal devices.
In it, our world is enriched with holograms through which we visualize events and information. These features can be used for design purposes, such as viewing a 3D model in physical space. Through such a system you can communicate with people, see their holograms, hear them, and convey feelings and emotions to them.
Through virtual or mixed reality you can create new objects, design them, and build layouts in space from a 2D drawing. All of this can be done with Microsoft HoloLens glasses or devices with similar technology.
Differences between AR, MR and VR
XR is an abbreviation for extended reality. It is divided into three categories: virtual reality (VR), mixed reality (MR), and augmented reality (AR).
In mixed reality, holographic objects and the real world interact with each other: virtual and real objects are blended. In virtual reality, the user is entirely inside a virtual world. In augmented reality, we see virtual objects, holograms, but do not perceive their depth. In mixed reality, the user both sees and feels depth, so a hologram feels as if it were real.
Closest to the real world is augmented reality, like the once-popular Pokemon Go app. As for virtual reality, the game Half-Life: Alyx was released this year: a person is immersed in a virtual world and interacts with its characters. There are no popular games in mixed reality yet; to immerse yourself in it you need a special device, for example Microsoft HoloLens.
It is essentially a whole computer: it has the classic modules, a battery and processors, as well as special ones, such as optics and sensors for mapping the surrounding space and analyzing the user's gestures and behavior.
This process requires special optics; we cannot simply put two identical displays in front of the eyes. If the user is shown the same picture in both, stereo vision will make the object appear infinitely far away. Microsoft HoloLens has to take this into account and adjust the image to the position of the glasses, and current HoloLens even have special sensors that detect the position of the pupil and adjust the holograms depending on where we look.
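The depth cue described above can be sketched with a tiny calculation (the numbers and the pinhole model here are illustrative, not HoloLens internals): the horizontal offset between the two eyes' images shrinks toward zero as an object moves away, which is why identical images read as "infinitely far".

```python
# Sketch: why each eye needs a slightly shifted image.
# With a given interpupillary distance (IPD), a point at finite depth
# projects to different horizontal positions for the left and right eye;
# zero disparity is what the brain reads as "infinitely far away".

def horizontal_disparity(ipd_m: float, depth_m: float, focal_px: float) -> float:
    """Pixel disparity between the two eyes for a point at the given depth."""
    return focal_px * ipd_m / depth_m

# A hologram 2 m away, a typical 63 mm IPD, a 1000 px focal length:
near = horizontal_disparity(0.063, 2.0, 1000.0)     # 31.5 px of shift
far = horizontal_disparity(0.063, 1000.0, 1000.0)   # almost no shift

assert near > far  # closer objects need a larger per-eye offset
```

Rendering the same picture to both eyes corresponds to zero disparity everywhere, which is exactly the "object at infinity" effect the text describes.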
The cameras in the glasses are needed to evaluate the surrounding world so that a hologram can be placed in depth, behind a real object. Special sensors are used for this: depth cameras that analyze the space around us and measure its depth.
There is also a dedicated camera that analyzes user gestures: modern Microsoft HoloLens glasses can detect hands. This means you can interact with holograms: take them, move them.
The main function of the cameras is analyzing the surrounding space. The user must understand where the walls, tables, and other objects in the room are. The glasses evaluate them by building a mesh of the space: the geometry around you, which lets you put a hologram behind a wall. We can hide a virtual object behind a real wall, and this creates a sense of depth and of the holograms really being present.
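The occlusion trick just described boils down to a per-pixel depth comparison against the reconstructed room mesh. A minimal sketch (one ray per pixel, distances in metres; purely illustrative, not the HoloLens pipeline):

```python
# Occlusion against the spatial mesh: a hologram pixel is drawn only
# if the hologram is closer to the user than the real surface along
# the same viewing ray.

def hologram_visible(room_depth_m: float, hologram_depth_m: float) -> bool:
    """True if the hologram is in front of the real-world surface."""
    return hologram_depth_m < room_depth_m

# A wall 3 m away, hologram placed 2 m away: drawn in front of the wall.
assert hologram_visible(3.0, 2.0)
# Hologram moved to 4 m: hidden behind the wall, which creates depth.
assert not hologram_visible(3.0, 4.0)
```

Running this test for every pixel, with room depth coming from the mesh built by the depth cameras, is what makes a virtual object disappear behind a real wall.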
The glasses detect gestures, but what for? To interact with holograms: users must be able to click on them, rotate and move them, and type on a virtual keyboard. For now this is very inconvenient: it works out to about one keypress per second.
So users have to relearn how to interact with the world and the interface. No one has had such devices before, so developers must make interactions understandable and similar to those in the real world. Everything should happen just as it usually does: the user should be able to pick up a mug the same way as in ordinary life. And this leaves room for imagination in creating new behavioral mechanics.
Where mixed reality can be applied in real life
The focus of modern glasses is industry. Why not the regular user? Because such devices are expensive, around $3,500. Only production, in critical situations, can justify the cost of such a device.
The first solution that can be implemented with the glasses is a virtual assistant. The idea: in production, some operations come with instructions of 100-150 assembly steps plus technical documentation that must be understood in depth. The glasses digitize these instructions and display every step in the form of holograms.
We developed a prototype of such a solution for a gas industry exhibition in St. Petersburg, where the application implemented an instruction for a liquefied natural gas tanker. The employee hears and sees what he needs to do in the form of holograms, text, and audio. Moreover, it is also a good platform for training, not only during the technical process itself, and such a solution can be built on Microsoft HoloLens glasses.
Another option is a remote assistant: we connect a worker wearing the glasses with a remote expert. Through a 2D interface, the expert sees what the worker observes through the camera, can talk to him over audio and video, and can draw hints in space. A hint is anchored in real space: if the expert circles, for example, a crane and says it needs to be turned, the employee may get distracted, but the hint will not disappear anywhere.
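The reason the hint "does not disappear" is that the annotation is stored in world coordinates rather than screen coordinates, so every new head pose re-projects it onto the same real-world spot. A simplified sketch (pinhole projection, no head rotation; all names are illustrative, not the actual application's API):

```python
# A world-anchored hint: stored once in world space, re-projected
# into the current view every frame.

def project(world_point, head_position, focal_px=1000.0):
    """Project a world-space point into the view of a head looking along +z."""
    x = world_point[0] - head_position[0]
    y = world_point[1] - head_position[1]
    z = world_point[2] - head_position[2]
    if z <= 0:
        return None  # behind the user: off screen, but still stored
    return (focal_px * x / z, focal_px * y / z)

hint = (1.0, 0.0, 3.0)                        # circle drawn around a valve
assert project(hint, (0.0, 0.0, 0.0)) is not None  # visible from the start
assert project(hint, (0.0, 0.0, 5.0)) is None      # worker walked past it
# Back at the original spot, the hint re-appears at the same place.
assert project(hint, (0.0, 0.0, 0.0)) is not None
```

Because only the projection changes while the stored point does not, the worker can look away and back without losing the expert's annotation.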
The toolchain also includes game engines, Unity and Unreal Engine, which are used to develop games and augmented or virtual reality applications.
Interactions with the robot
A robot is a device controlled through several degrees of freedom. For example, it may have two motors that can be moved and coordinated by software. There are many such devices in the laboratory of Innopolis University: a drone that does not break when it falls, a humanoid, and a walking robot resembling a person. There are also industrial manipulators that can be used in production. Each of these devices can be interacted with.
What is a robot?
How can this be done? The first example is virtual reality: a VR simulator for industrial manipulators. With it you can learn to work with a robot, understand how the device operates, program it to perform some action, and test it in simulation.
The next solution at the intersection of VR technologies and robotics is a physical process simulator. For example, an entertainment flight simulator for a spacecraft, where robots simulate the motion and VR glasses provide a picture corresponding to a given stage of the flight. The process becomes immersive and more realistic.
Virtual reality is also used for teleoperation: in dangerous production or an unsafe environment where it is hard for a person to be, devices can work instead and reduce the danger to humans. But they need to be controlled: a robot can only perform routine operations, and when we are talking about disasters or an unsafe environment, human intelligence is required. Therefore, telecontrol interfaces are needed, and VR is one of the options.
Robots can help during dangerous operations
Augmented and mixed reality are most prominently applied in robot programming. Through the device, the user can see the robot's state and what it is about to do. Augmented and mixed reality focus on such solutions. When a robot is installed and needs to be set up for some task, a special remote control is normally used: the device is moved to a specific position, the position is memorized, and the robot is moved further.
But once the program is fully created, it is hard to foresee whether some unexpected incident will occur. Special interfaces are created for this, both for safety and for more efficient interaction.
XR and mixed reality help with this. In them you can configure the virtual space, adapt it to a specific experiment, or swap in a different robot. Engineers get unlimited configuration options, and this reduces safety risks.
Specifically, our solution uses several virtual objects. There is a menu for interacting with the system and a target point describing the position where we want the robot to go. There are also virtual models of the robots and their attachments: a gripper or a tool.
After setting up the points, we can run a motion simulation to check how the trajectory will be executed. Without it, the robot or equipment could be damaged; with a virtual simulation, the check is safe and visual for a person.
And once we understand for sure that the program is correct and are confident in it, we can start the real robot and execute the program we need.
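The "simulate first, execute only after the check passes" flow above can be sketched in a few lines. Everything here is a stand-in: a real system would use the vendor's planner and collision checker, not linear interpolation and joint limits alone.

```python
# Sketch of the simulate-then-execute workflow (illustrative only).

def plan(start, target, steps=10):
    """Linearly interpolated joint-space trajectory from start to target."""
    return [tuple(s + (t - s) * i / steps for s, t in zip(start, target))
            for i in range(steps + 1)]

def simulate_ok(trajectory, joint_limits):
    """Virtual run: every waypoint must stay inside the joint limits."""
    return all(lo <= q <= hi
               for point in trajectory
               for q, (lo, hi) in zip(point, joint_limits))

limits = [(-3.0, 3.0)] * 3          # allowed range per joint, radians
traj = plan((0.0, 0.0, 0.0), (1.0, -2.0, 2.5))
assert simulate_ok(traj, limits)    # passes: only now run the real robot
# A target outside the limits is caught in simulation, not on hardware.
assert not simulate_ok(plan((0.0, 0.0, 0.0), (4.0, 0.0, 0.0)), limits)
```

The point of the pattern is that the failing case above is discovered virtually, where it costs nothing, instead of on the physical robot.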
How to communicate with a robot
Below you can see how interaction happens through our app. These shots were taken in the laboratory and the experimental hall. We interact with two types of robots: the Platoon mobile device and the Kuka IIWA industrial robot.
The user puts on Microsoft HoloLens glasses; the glasses first analyze the space, and then, using gestures, the user begins to interact with the robots: determining their positions and setting coordinate points so that the devices start moving.
Robots navigate in space using coordinates
For a mobile robot it is enough to place a dot on the floor, a 2D coordinate; for the manipulator you need to set a 3D coordinate. Each manipulator has its own requirements, and once the program is passed, the devices can be simulated. At the same time you can immediately see the trajectories, what the robot will do; this cannot be done through a tablet or a plain text-editor interface. The point of such a system is that through one interface you can interact with different types of robots.
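One way to get "one interface for different robot types" is to give every robot the same entry point while each type consumes only the coordinates it needs: a 2D floor point for the mobile robot, a full 3D point for the manipulator. A minimal sketch (class and method names are hypothetical, not the actual application's API):

```python
# One entry point for every robot; each type interprets the target
# according to its own requirements.

def set_target(robot, point):
    """Send a target to any robot through the shared interface."""
    robot.move_to(point)

class MobileRobot:
    def move_to(self, point):
        x, y = point[:2]          # a dot on the floor: 2D is enough
        self.target = (x, y)

class Manipulator:
    def move_to(self, point):
        x, y, z = point           # the gripper needs a full 3D coordinate
        self.target = (x, y, z)

platoon, kuka = MobileRobot(), Manipulator()
set_target(platoon, (1.0, 2.0))
set_target(kuka, (0.3, 0.1, 0.5))
assert platoon.target == (1.0, 2.0)
assert kuka.target == (0.3, 0.1, 0.5)
```

The caller never branches on the robot type; the per-robot requirements live inside each robot's own implementation.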
It is these cross-platform technologies that may interest people who are still searching for what excites them. At the intersection of XR and robotics one can note technologies such as computer vision: both the glasses and the robot carry special cameras for analyzing the space. There are also SLAM algorithms for building maps, controllers, and a tracking system for device positioning.