Microsoft will make AI accessible to the deaf and visually impaired

The new initiatives are aimed at combating the so-called "data desert": machine-learning algorithms lack sufficient training data, and technologies designed specifically for people who cannot fully use their hearing or vision are not being developed.

One of the projects, "Object Recognition for Blind Image Training," will create a new public dataset of videos submitted by blind people. This data will be used to develop algorithms for smartphone cameras that recognize objects people need on a daily basis, such as a wallet, a face mask, or a transit card. With their help, devices will be able to tell users where in the room those objects are.

Microsoft is also partnering with Team Gleason, an organization that supports people with amyotrophic lateral sclerosis (ALS). Together they will create an open dataset of photographs of individuals with this neurodegenerative disease, so that future algorithms can better recognize people with signs of ALS.

A third project team is developing a publicly available dataset for training, validating, and testing recognition of various labels and signs in the environment. As a result, visually impaired people will be able to point the camera at text, and the smartphone will identify it and read it aloud to the user. The researchers note that several dozen more ideas are still at the development stage, all of them aimed at improving the lives of people whom AI has so far largely left out.

For example, in the future an autonomous vehicle will identify a person in a wheelchair and stop for them. Likewise, a predictive hiring system will not downgrade candidates with disabilities simply because they differ from the "ideal employee" model the AI was trained on.
