In recent decades, artificial intelligence has performed well in many areas of science and technology.
Neural networks, whether biological or artificial, learn by tuning the connections between their neurons. As connections are made stronger or weaker, some neurons become more active and others less so, until a particular pattern of activity emerges. We call this pattern a "memory". The AI strategy is to use complex, lengthy algorithms that iteratively tune and optimize the connections. The brain does something far simpler: each connection changes only according to how active the two neurons it links are at the same time. For a long time, this simple rule was believed to store fewer memories than AI algorithms can; a minimal sketch of the rule follows.
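To make the rule concrete, here is a minimal sketch (in Python/NumPy; the function names and parameters are illustrative, not taken from the study) of one-shot Hebbian storage in a Hopfield-style network: each weight is set purely by the joint activity of the two neurons it connects, with no iterative optimization.

```python
import numpy as np

def hebbian_store(patterns):
    """Store binary (+1/-1) patterns with a one-shot Hebbian rule:
    each weight grows with the correlated activity of its two neurons."""
    n = patterns.shape[1]
    W = np.zeros((n, n))
    for p in patterns:
        W += np.outer(p, p)          # delta_w_ij proportional to x_i * x_j
    np.fill_diagonal(W, 0.0)         # no self-connections
    return W / len(patterns)

def recall(W, probe, steps=20):
    """Retrieve a stored memory by repeatedly letting each neuron
    align with the summed input from its neighbours."""
    x = probe.copy()
    for _ in range(steps):
        x = np.sign(W @ x)
        x[x == 0] = 1                # break ties deterministically
    return x

# Store two random patterns, then recall one from a corrupted probe.
rng = np.random.default_rng(0)
patterns = rng.choice([-1, 1], size=(2, 100))
W = hebbian_store(patterns)
noisy = patterns[0].copy()
noisy[:10] *= -1                     # flip 10% of the bits
print(np.mean(recall(W, noisy) == patterns[0]))  # close to 1.0
```

With only a few stored patterns, the corrupted probe is pulled back to the original; the interesting question, addressed by the study, is how many patterns such a rule can hold.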
A new study paints a different picture: when the brain's relatively simple strategy for altering neural connections is combined with biologically plausible models of how individual neurons respond, it performs as well as, or even better than, AI algorithms.
The reason for this paradox lies in the introduction of errors: a retrieved memory can be either identical to the original input or merely correlated with it. The brain's strategy retrieves memories that are not identical to the original inputs, suppressing the activity of the neurons that are only barely active in each pattern. These damped neurons play no critical role in distinguishing among the different memories stored in the same network. By ignoring them, the network focuses its resources on the neurons that matter for the input being remembered, which yields a higher storage capacity. A sketch of this suppression step follows.
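One way to picture the suppression step (again an illustrative sketch under assumed details, not the study's actual model) is a recall rule that silences any neuron whose summed input falls below a threshold, so the retrieved state keeps only the strongly driven units:

```python
import numpy as np

def recall_with_suppression(W, probe, theta=0.3, steps=20):
    """Recall that damps weakly driven neurons: units whose summed
    input is below the threshold theta are silenced (set to 0) rather
    than pushed to +/-1. The retrieved memory keeps only the strongly
    driven units, so it is correlated with, but not identical to,
    the original pattern."""
    x = probe.astype(float).copy()
    for _ in range(steps):
        h = W @ x                                    # summed input to each neuron
        x = np.where(np.abs(h) < theta, 0.0, np.sign(h))
    return x
```

Plugged into the Hebbian network sketched earlier, this retrieves states that match the original patterns on their strongly active neurons while zeroing the weakly driven ones, i.e., errorful but more capacious recall.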
Overall, this study highlights how biologically plausible, self-organizing learning procedures can be as effective as slow and biologically implausible learning algorithms.